Table of Contents
Abstract
Introduction
Social Engineering Overview
Social Engineering Fundamentals
Types of Social Engineering Techniques
Social Engineering – The threat it is today
Persuasion and Influence techniques
Multifaceted Defence Combatting Social Engineering
Conclusion
Bibliography
Abstract

Social engineering refers to the process of using deception to manipulate individuals into giving away personal and confidential information. Despite being a persistent threat to a company's confidentiality, integrity and availability for many years, it remains one of the most formidable. Verizon's most recent Data Breach Investigations Report (DBIR) reveals that around 90% of recorded data breach incidents started with a phishing email (Verizon, 2016). Over the course of this paper, an overview of social engineering will be given, along with the fundamentals of how social engineering works. It will then examine the persuasion and influence techniques that make social engineering so effective. To conclude, this paper will put forward a multifaceted defence to combat social engineering, countering the psychology that allows it to be such an effective tool.
Introduction

Living at the height of the Information Age means information security has never mattered more. With a greater number of people and businesses going paperless, there is an ever-increasing need to keep digital information secure. The CIA triad – the Confidentiality, Integrity and Availability of a company's data – is a general model designed to guide security policies for information security inside an organization. Confidentiality refers to the privacy of data: making sure that only people who are allowed to access data are able to access it. Integrity refers to maintaining the accuracy and trustworthiness of data: making sure that data cannot be altered by an unauthorized person. Lastly, availability refers to the ability of authorized people to access data whenever they need it.
Cybercrime is a billion-dollar industry, built upon hackers breaking an organization's CIA triad. This includes, but is not limited to: selling classified information after violating a company's confidentiality; deleting important research data, setting a company back and allowing a competitor to triumph, thereby violating a company's integrity; or locking an organization out of its own servers using ransomware, denying the data's availability. Because cybercrime is such a lucrative industry, there is a myriad of techniques that hackers use to break into a company.
Social Engineering Overview
Social engineering is one of many dangerous threats to information security. Social engineering uses psychology to deceive and manipulate people, with the goal of extracting personal and confidential information from unsuspecting victims, which may then be used for fraudulent purposes (Europol, 2017). Tetri & Vuorinen (2013) split the act of employing a social engineering technique into three parts: the initial act of intrusion; the social aspect of preparing and carrying out the intrusion; and finally the acquisition of something of value.
Social engineering can be used to achieve many things, ranging from something as simple as obtaining people's login credentials to an online game and taking over their accounts, to gaining access to an organization's network, granting the engineer access to valuable, confidential information. This paper will focus on the latter, more malicious type of social engineering. However, to demonstrate what social engineering looks like, let us take a brief look at the former example:
Low level: The hacker messages a person playing the game, impersonating an employee of the game’s company. The hacker claims something is wrong with the player’s account and that they require their username and password to fix it.
Mid level: The hacker messages someone playing the game, impersonating an employee of the game's company. This time they message with a limited-time offer of something appealing to the player, who has to act fast or else the offer will expire. The social engineer then attempts to get the player to reveal their secure information and uses it to access their account.
High level: The social engineer messages the player saying something is wrong with their account and they will be banned unless they fix it by going to a website. This site will look identical to the game creator's, but will instead be controlled by the hacker. When the user inputs their login information, they will in fact be transmitting it to the hacker, thereby granting them access. From here, the social engineer can change passwords and completely take over the account.
While the above could be considered a seemingly mundane and innocuous use of social engineering, it can easily be adapted to steal a person's login credentials for services such as email or online banking. Last year's PayPal scam was essentially the same as the High Level example above: hackers sent people phishing emails claiming to be from PayPal, saying that their account had been closed and that they needed to click a link, which directed users to a realistic PayPal site controlled by the hackers (Reeder, 2016). When users typed in their login information in an attempt to reopen their accounts, they gave their secure details to the hackers, who were then able to input the victims' credentials into the genuine PayPal website and access their accounts.
To use Tetri & Vuorinen's (2013) terms, the initial act of intrusion was the sending of the phishing email; the hackers then prepared and carried out the intrusion by creating a fake website and directing the victims there; and finally they acquired multiple login credentials. This violated the whole CIA triad: the PayPal accounts were no longer confidential, their login credentials were changed, which violated the integrity of the accounts, and as a result the accounts were not available to the users. This shows just how powerful a tool social engineering can be, as it can directly or indirectly violate the confidentiality, integrity and availability of data.
There is an ongoing debate about the best way to counter social engineering. Some believe it should be through technological means such as filters and scanners, so that the user is never even aware of the social engineering attempt to begin with (Peltier, 2015). The other side of the debate focuses on the human aspect and asserts that education, and making people aware of social engineering tactics, is the best way forward.
While a degree of technological countermeasures can be effective, especially when it comes to filtering emails to counter phishing, producing technological countermeasures alone is not enough, as this does not address the main problem. Because social engineering preys on the susceptibility of people, educating employees and making them aware of social engineering techniques needs to be part of the solution. This paper proposes a multifaceted approach using both technological measures and awareness training to counter the threat of social engineering, with the focus on the latter.
In order to help ensure corporate information security, it is essential for employees to be aware of the many different social engineering techniques that can be used against them (Thornton, 2016). If employees are educated and aware of the plethora of attacks that can be used against them, they will be more prepared to deal with and recognise such attacks when they arise (Thornton, 2016).
Social Engineering Fundamentals
There is a variety of different social engineering techniques in use today. In order to use them, a social engineer first needs to develop a sense of trust with their target. This can be done in a multitude of ways, such as pretending to be an employee (tailgating), reciprocation by way of exchanging favours, and more (Granger, 2001). This opens up a communication channel that the hacker can exploit, asking for small favours at first and gradually increasing their size until the employee does not even realize how much information they are giving away, or to whom. This is especially successful when calling places like customer service or help desks, because they are designed to be helpful and not question the authenticity of every call; if every call were questioned, the service would become extremely inefficient, which could itself be an attack, causing a denial of service for the customer service phone lines.
Lexihut (2016) found that using technology to change one's voice to sound more female lowers a target's guard, as females are believed to be more persuasive and less likely to be perceived as a threat. As this technology is very cheap, it is a small price to pay for what could make the difference between a successful and an unsuccessful attack.
Trust can also be achieved through a method known as reverse social engineering (Granger, 2002). Reverse social engineering refers to when a hacker creates a problem for an organization, and then makes himself available to fix it. Upon the social engineer's arrival, the target is so grateful for the help that they are already primed to perform favors for the social engineer (Rusch, 2000). After the problem is solved, the target is in the hacker's debt and will very likely be willing to reciprocate and do a favor for the hacker. While this requires a lot of planning and research, it is clearly a highly effective tool for establishing trust.
There are other methods of social engineering, and establishing trust, such as phishing and tailgating. The next section will describe these in more detail.
Types of Social Engineering Techniques
Phishing scams are the most common social engineering scams in use today. The majority of phishing scams come in the form of an email and try to extract personal information such as the target's name, address, bank account details, and more (Bisson, 2015). Most phishing scams will utilize a sense of urgency – that the reader needs to act within a certain amount of time or something bad will happen, e.g. missing out on an opportunity – in order to make the reader panic and act without thinking.
Other phishing scams will pretend to be from a well-known company and give the reader a URL to click on that looks genuine, but instead sends them to a website controlled by the hacker (Bisson, 2015). This website could then either use drive-by code installs, which install malware on the reader's computer, or impersonate a known website in order to trick the reader into entering personal account details. Phishing emails are generally sent to a large number of people and may be poorly constructed, going for quantity rather than quality. Upon receiving responses, the attackers craft subsequent emails more carefully, having analysed the replies and identified potential targets that they may be able to exploit (Cate, 2016).
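Simple technological filters of the kind this paper advocates can catch some of these tricks automatically. The sketch below is a minimal Python heuristic – the keyword list and all names are illustrative assumptions, not taken from any cited tool. It flags links whose visible text names a different domain than the underlying target, and counts urgency keywords of the sort phishers rely on:

```python
import re
from urllib.parse import urlparse

# Illustrative marker words of manufactured urgency (not an exhaustive list).
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "expire"}

def registered_domain(url: str) -> str:
    """Crudely extract the last two labels of a hostname (e.g. paypal.com)."""
    host = urlparse(url).hostname or ""
    parts = host.lower().split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def suspicious_link(display_text: str, href: str) -> bool:
    """Flag a link whose visible text names a different domain than its target."""
    shown = re.search(r"[\w.-]+\.[a-z]{2,}", display_text.lower())
    if shown is None:
        return False  # no domain shown in the link text; nothing to compare
    return registered_domain("http://" + shown.group()) != registered_domain(href)

def urgency_score(body: str) -> int:
    """Count urgency keywords, a crude marker of pressure tactics."""
    words = re.findall(r"[a-z]+", body.lower())
    return sum(w in URGENCY_WORDS for w in words)
```

A link displayed as `www.paypal.com` but pointing at an attacker-controlled domain would be flagged, exactly the mismatch exploited in the PayPal scam described earlier. Real mail filters are far more sophisticated, but the principle is the same.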
Spear phishing is essentially a more targeted version of phishing. As opposed to casting a wide net and reeling in a lot of people, spear phishing is aimed at specific individuals. For example, a phishing email will target a large number of people with the generic salutation "dear user"; a spear phishing email will instead use a person's name, which makes the email seem more legitimate (Cate, 2016).
Vishing, also known as phone phishing, uses voice solicitation to extract information from unsuspecting people (Žukina, 2015). This technique can be used in multiple ways, from soliciting people at home to extract personal information such as account details by using a fake interactive voice response (IVR) that tricks users into believing they are communicating with their bank or other account, to vishing support staff who work at companies where people have accounts (Žukina, 2015). This can result in these employees giving away information about account holders, and in some more extreme cases can allow the hacker to gain control of other people's accounts.
Lexihut (2016) provides an example of the latter. Using a phone with its number spoofed to look like it was coming from the number registered to the account, and an audio clip of a crying baby, she manipulated a sympathetic customer support staffer working for a cell phone provider into giving her details about an account she claimed belonged to her husband (Lexihut, 2016). She then added herself to the account and created a new password, effectively locking the true owner out of the account and granting herself full access (Lexihut, 2016).
It is important that customer support staff are educated about and warned against such vishing attacks. There need to be strict protocols in place that aid in preventing such attacks, which employees can adhere to when such a situation arises.
Baiting is very similar to phishing. The main difference between them comes from the hook or "bait" that the hackers use (Thornton, 2016). The bait can be either physical or digital in nature. Common examples include storage devices such as CD-ROMs or USB drives, or digital downloads such as free MP3s or videos (Bisson, 2015). The content of the bait is carefully chosen using psychology to tempt people into taking it by appealing to their greed or curiosity, for example a USB drive with a label implying it contains employee salary information (Žukina, 2015). This would appeal to both an employee's curiosity and greed, as they would want to see what their co-workers were being paid, as well as whether they could be getting paid more money.
Stasiukonis (2016) used such baiting techniques to test a company's security. This was achieved by planting 20 USB sticks in the parking lot of the company being tested (Stasiukonis, 2016). Of the 20 planted, 15 were found by employees of the company in question, brought inside, and plugged into company computers. Upon being plugged in, the USB sticks installed malware such as key loggers, which enabled Stasiukonis (2016) to view employee login credentials, as well as other secure information. This attack preys on humans' innate curiosity, a subtle but powerful psychological motivator (Lowenstein, 1994).
Quid Pro Quo
Quid pro quo is similar to baiting in that the hacker offers something in exchange for information. However, with quid pro quo attacks the hacker offers a service, as opposed to a good (Thornton, 2016). In a common attack, a hacker claiming to be from technical support spam-calls as many direct numbers of a company as they can find (Bankvault, 2015). They do this until they find someone who actually requires technical assistance, and then, under the guise of providing assistance, get the user to perform acts such as disabling their antivirus or installing a fake software update, which installs malware instead (Bankvault, 2015).
A survey undertaken at InfoSecurity Europe 2003 tested the security awareness of office workers at London Waterloo Station (BBC, 2004). It found that approximately 80% of workers would give away their computer passwords for a cheap pen, up from 65% the previous year (BBC, 2004). Although the survey lacks important details – the passwords were never tested to see if they were genuine – the high percentage strongly suggests that at least some people gave away their real passwords for a cheap trinket (BBC, 2004). Similar surveys have been performed with similar results, such as exchanging passwords for a bar of chocolate or a small sum of money (BBC, 2004).
Pretexting is a form of social engineering in which attackers attempt to instil a sense of trust in their victims, in order to get the end user to willingly send the hacker personal or company information (Bisson, 2015). Hackers can achieve this by impersonating a co-worker, or someone with more authority than the victim, such as the head of IT, or even going so far as to impersonate the chief executive officer or chief financial officer of the company.
However, this social engineering technique is not just something employees should be made aware of, but anyone who uses a computer or accesses the internet. Dewey (2014) cites a story of people posing as recruitment agents for model or escort agencies. These scammers, under the guise of recruitment agents, manipulated underage girls and women into privately messaging pictures of themselves in various states of undress, supposedly for the hackers to show the agencies they claimed to represent (Dewey, 2014). The fake recruiters ultimately released these pictures online for the world to see, in some cases even finding the subjects of the pictures on social media and posting the pictures on their pages (Dewey, 2014).
Although this paper will focus solely on the professional side – educating employees to recognize and counter these social engineering techniques to protect corporations from data breaches – it is important that we recognize social engineering for the threat it is, and start educating the general population about it from a young age. That way, by the time people are old enough for social engineering tactics to do some damage, avoiding and recognizing these tactics will already be a learned, automatic behaviour.
Tailgating, also known as piggybacking, is a very simple but highly effective social engineering technique that can be used to gain access to restricted areas or buildings (Bisson, 2015). A typical technique is having one's hands full, for example with a few coffee cups, in order to trick overly helpful employees into holding open a door for the tailgater that would otherwise require a key card or some other security measure to get in (Thornton, 2016).
Using tailgating techniques, security consultant Greenless (Chapman, 2009) managed to gain access to a client company without being challenged by security. He was then able to set up, and work out of, a meeting room for several days, during which time he manipulated employees into sending him secure information by calling them on the internal telephone system (Chapman, 2009). This is another example of pretexting: by calling employees from the internal telephone system, he made them believe that he was working for the company and was thereby a trusted party. As a result of these social engineering techniques, Greenless was able to get 17 out of 20 users to give him their usernames and passwords, granting him access to secure company data (Chapman, 2009).
Social Engineering – The threat it is today
According to the Verizon (2016) Data Breach Investigations Report (DBIR), social engineering has become an increasing threat, with phishing continuing to rise in use: almost 10,000 incidents last year alone, one tenth of which had confirmation that confidential data had been disclosed (Verizon, 2016). Furthermore, the DBIR conducted an experiment to see how many people fell victim to simulated phishing attacks (Verizon, 2016). Of the population tested, approximately one in seven participants clicked on and opened a phishing attachment. This is a statistically non-significant increase from the one in eight people who downloaded a phishing attachment the previous year, but is still a worryingly high statistic (Verizon, 2016).
Furthermore, there was a marked increase in the number of people who opened the phishing email: 30% of tested participants opened it, a statistically significant seven percentage point increase from the previous year, when only 23% of participants did so (Verizon, 2016). This clearly shows that social engineering techniques such as phishing are still a major problem, and steps need to be taken to reduce their effectiveness.
Verizon reports email attachments to be the number one delivery vehicle for malware, followed closely by web drive-by attacks, with a hybrid of both in third place, i.e. an email with a link to a website that performs drive-by code installs (Verizon, 2016). With email attachments being the number one source of malware downloads, it is more important than ever to work on combatting social engineering, so that employees are able to recognize these fraudulent emails for what they are.
With Verizon's DBIR recognition of the threat phishing poses, as well as a growing awareness of social engineering, it is not uncommon for companies to have some sort of social engineering awareness training scheme. Heath (2017), Chief Information Security Officer (CISO) at a major energy company, recently ran an 18-month phishing awareness scheme at Aecom. When the programme first started, the initial simulated phishing emails sent to employees had a 30% click rate – three out of ten people opened the simulated phishing email (Heath, 2017). After 18 months of the programme, the employees' click rate fell to 6.7% – a reduction of more than three quarters.
While such a decrease is impressive, simulated phishing emails are often extremely similar in format and content to the initial phishing email, and as a result are unsurprisingly ineffective as a test the second time around (Ortutay, 2015).
While Heath (2017) had an annual awareness training programme, it was not mandatory. This paper instead proposes using positive punishment – commonly misnamed negative reinforcement – in the form of a three-strike system, where employees who fall for a phishing email three times have to attend a mandatory seminar.
Since social engineering cannot be countered by a technological approach alone, this paper suggests a multifaceted approach: technological prevention methods, such as filtering and scanning, combined with an educational and awareness training programme to combat the ever-increasing threat that is social engineering. Furthermore, a good security policy is required, along with the use of social engineering security land mines.
Persuasion and Influence techniques
Before a plan resistant to social engineering can be developed, we first have to understand the psychology behind social engineering, in order to combat it through a multifaceted defence.
Authority – People are conditioned, in the right situation, to be highly responsive to assertions of authority, even when the person who claims to be in a position of authority is not physically present (Rusch, 2000). Cialdini (1993) investigated this in a study involving 22 separate nursing stations. Each nursing station was contacted by a researcher who falsely identified himself as a resident's doctor and ordered the nurse who picked up the phone to give 20 milligrams of a prescription drug to a specific patient (Cialdini, 1993). In 95 percent of cases the nurse obtained the necessary dosage and attempted to administer it to the specified patient, before being stopped and told that she was merely the subject of an experiment (Cialdini, 1993).
Reciprocation – A well-known social convention is the act of reciprocation: if person A gives person B something, it is a natural human response for person B to feel a strong urge to give person A something in return (Rusch, 2000). Even when person A's gift was given unasked, person B will still feel a strong urge to reciprocate when person A asks a favour, even if that favour is something far more valuable than the gift. Wong (2017) found that people were willing to part with their passwords for something as simple as a free slice of pizza.
An example of reciprocation can be found in the earlier example of reverse social engineering (Granger, 2002). The hacker comes in ready to save the day by solving a problem of his own creation, and the victim feels indebted to the attacker even before the problem is fixed (Rusch, 2000). This is exactly what the attacker wants, as they now have a person on the inside and access to the target's network. From here, while pretending to fix the problem, they can create a backdoor of some sort, by installing malware or other means; and with the victim feeling in debt, the hacker has a person ready and willing to help with whatever they need.
Overloading – Overloading refers to a person being bombarded with information, especially false information mixed with convincing truths, in a short period of time (Rusch, 2000). When processing a sheer volume of information in a short time, people revert to a "mentally passive" state (Cialdini, 1993), in which they only absorb information and are unable to analyse it.
Another way to overload someone is to argue a point from an unanticipated point of view. This blindsides the person, who is taken aback and needs time to process the novel point of view, but is not given enough time to do so. The victim similarly ends up accepting false statements as true, even though these claims should have been questioned, merely because there is not enough time to process the new information.
Strong Affect – A persuasion technique similar to overloading, strong affect is used by social engineers to instil an elevated emotional state in their target to make them think less clearly (Rusch, 2000). The social engineer achieves this by making an assertion at the onset of the interaction that triggers strong emotions in the target, such as a strong sense of anger, surprise or anticipation. An example would be a social engineer calling an employee and impersonating a person of high seniority in the company, whose voice the employee would not know, then demanding that the target do something of extreme importance, under threat of being fired if they do not comply.
Scarcity – A subgroup of strong affect, scarcity relates specifically to something that the social engineer is offering the target which is available for a limited time only. Even though the odds of winning the offered prize are remote, the surge of strong emotions and the panic at missing out win over, and can lead the target to give away confidential and important information that they would not otherwise disclose (Rusch, 2000).
The above techniques represent a few of the main psychological tricks that social engineers play on their targets, in order to get them to give away confidential information.
Multifaceted Defence Combatting Social Engineering
Now that the vulnerabilities and weaknesses that social engineering exploits have been evaluated, we can begin to discuss a defence model for countering such tactics. Since there is a veritable plethora of social engineering techniques, a multifaceted defence is required, in order to combat most if not all of them and reduce their effectiveness.
A company without such a defence will face a bombardment of attacks, and while some will fail, the hacker will eventually gain access to the organization and start wreaking havoc.
First and foremost, a company needs to have a security policy that addresses the threat that social engineering poses. A security policy sets out the security standards and constraints put upon the employees who sign it. This covers a broad range of topics, from access control to a computer security policy – what employees are and are not allowed to do on company computers. In order for a security policy to be effective, and followed by employees, a security culture needs to be fostered by management.
Security culture refers to the beliefs and approach a group of people takes towards security. A beneficial security culture can be developed through security awareness training – making employees aware of security issues and how to overcome them, i.e. not just giving them a contract and telling them to sign it, but using active teaching methods, such as activities.
A common clause in a security policy mandates forced, frequent password changes. While there is a case to be made for regular password changes, they generally pose more of a threat than a solution. Many security specialists, as well as government organisations such as the Federal Trade Commission (FTC) in the US and the National Cyber Security Centre (NCSC) in the UK, agree that forced, regular password changes do not provide the security benefit that was previously thought.
The dangers of enforcing regular password changes are numerous: a weaker password will generally be chosen, or the new password will be similar to the old one so that it is easier for the user to remember, which allows attackers to guess the new password far more easily (NCSC, 2015). In addition, some report that people will simply stick post-it notes on their monitors with their passwords written on them, in order to remember their frequently changing login credentials (Cranor, 2016). If a company were infiltrated by a tailgater, this would give rise to a major security breach, as the tailgater would be able to easily access the organization's computer network.
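The guessability of rotated passwords is easy to demonstrate. The sketch below is a hypothetical illustration (not taken from the NCSC guidance or any cited tool) of the trivial mutations users most commonly make under forced rotation, showing why an attacker who has learned an old password can often derive the new one:

```python
import re

def likely_successors(old_password: str) -> list:
    """Generate common rotation mutations (illustrative list, not exhaustive)."""
    guesses = []
    # Incrementing a trailing number is the classic rotation habit.
    match = re.search(r"^(.*?)(\d+)$", old_password)
    if match:
        stem, number = match.groups()
        guesses.append(stem + str(int(number) + 1))
    else:
        guesses.append(old_password + "1")
    # Appending a symbol to "strengthen" the password is also common.
    guesses.append(old_password + "!")
    return guesses
```

An attacker who captured `Summer2023` from an old breach would simply try `Summer2024` first, which is one reason these bodies advise against routine forced expiry in favour of changing passwords only on suspected compromise.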
To stop people like tailgaters, access control mechanisms are key. Many companies use smart cards to protect and monitor access to a building and, in more advanced access control systems, to record who has accessed certain rooms. For smaller companies this may not be necessary, as they are generally more close-knit, with everyone knowing everyone.
Once the security policy has been set out, all employees need to undergo active awareness training. Employees must be told about the dangers of social engineering, and how easy it is to fall prey to its tactics. In addition, they must understand the possible consequences that one small action can have. For example, one person downloading an attachment from a phishing email onto a company computer could have disastrous consequences, which in extreme cases could make a company go under, putting the employee out of a job (Granger, 2002). This needs to be made clear to all employees, so that they understand just how important it is to be vigilant at all times.
For employees not to fall prey to such tactics, they need to be told specifically how social engineers work. Employees should be able to identify what confidential information is and protect it. Most importantly, they need to be able to say no when required, and be backed up by management in cases where an innocent person may be suspected of attempting social engineering techniques (Rusch, 2000).
After awareness training, companies should regularly send their employees fake phishing emails to test their vigilance. In order to maximize compliance, operant conditioning practices should be used, employing both positive reinforcement and positive punishment (the latter commonly and incorrectly referred to as negative reinforcement). Positive reinforcement refers to encouraging a desired behaviour by offering a reward when the behaviour is performed (Skinner, 1953). Positive punishment refers to presenting an unfavourable stimulus after an undesired behaviour or action is performed (Skinner, 1953).
By way of positive reinforcement, employees will be rewarded for reporting phishing emails as malicious hacking attempts, and will have a score showing how many phishing emails they have correctly reported. So that employees do not simply label everything as phishing, a value of -1 will be added to their score for each legitimate email they incorrectly report as phishing. At the end of each month, top-scoring employees could be entered into a raffle for a gift card or an all-expenses-paid trip, depending on the company's expense account.
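The scoring scheme above can be sketched in a few lines. The class and identifiers below are hypothetical, and the sketch assumes the company records the message IDs of its own simulated phishing campaign so that reports can be checked against them:

```python
from collections import defaultdict

class PhishReportScoreboard:
    """Illustrative sketch of the +1/-1 reporting score described above."""

    def __init__(self, simulated_phish_ids):
        # IDs of the company's own simulated phishing emails (assumed known).
        self.simulated = set(simulated_phish_ids)
        self.scores = defaultdict(int)

    def report(self, employee: str, message_id: str) -> int:
        # +1 for correctly reporting a simulated phish, -1 for a false alarm.
        self.scores[employee] += 1 if message_id in self.simulated else -1
        return self.scores[employee]

    def leaderboard(self):
        # Highest scorers first: the monthly raffle candidates.
        return sorted(self.scores.items(), key=lambda kv: -kv[1])
```

In practice the "false alarm" penalty would only apply to genuinely legitimate mail, since reporting a real phishing email should of course never be penalized; that distinction is outside the scope of this sketch.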
By way of positive punishment, employees could be 'punished' by having to attend a mandatory awareness training seminar after falling prey to three phishing emails. A three-strike system is useful here because, even if employees fall prey to fake phishing emails sent by the company, no harm will have been done. Upon downloading an attachment, they will be taken through a brief online course showing them where they went wrong. However, because such brief courses are often ineffective, after three failed attempts a mandatory in-person seminar will be required.
Such phishing emails act as ongoing reminders, ensuring that nobody forgets the threat of social engineering. If the seminars are a one-off event, people will rapidly forget, so it is important to continually remind employees by sending them these simulated phishing emails (Granger, 2002).
Social Engineering Land Mines (SELM)
Social Engineering Land Mines (SELM) are traps set up to identify and expose a social engineering attack (Whitman & Mattord, 2016). These traps are set up in such a way that they can stop an attack midway and may even result in the social engineer being caught.
The Know-It-All – This is a social engineering land mine in the form of a person. This person's job is to know everyone who is on their floor and walking around their department (Whitman & Mattord, 2016). With an access control mechanism using key cards, this person's job can be made much easier, and they may be able to catch a social engineer attempting to tailgate their way into an organization's building.
Call Back Policy – A fairly well-known and widely used policy is the call back policy. This policy defeats attackers who spoof other people's phone numbers to request password resets: the target calls the requester back on the registered number, which the social engineer cannot answer because they do not actually possess the phone associated with that number. The call back is also a useful way to counter the psychology behind social engineering, as it gives the target time to think about the requests the social engineer is making (Rusch, 2000).
Stop and Think Policy – Finally, the stop and think policy requires employees to put callers on hold when they request services such as password changes. Psychological research into social engineering asserts that one of the things that makes it so effective is that people can become overloaded by a sense of urgency, surprise or pressure (Rusch, 2000). The stop and think policy counters this tactic by giving the employee time to consider the request and, if required, to consult with management.
Incident Response Layer
The best defence is a good offence, and even the best defence in the world will crumble without one. With no offence, blow after blow can be levied against the defence, and while a strong defence will last a while, a weakness will eventually be found and the attacker will get in.
The same principle applies to social engineering: there needs to be an offence that supports the defence, and this can be achieved through an incident response department. Without such an offensive layer, a social engineer will get better at navigating the organization's defences, learning something new with each attempt, until they are finally able to break through and infiltrate the organization.
The incident response layer stops social engineers from navigating the organization's defences, because as soon as a social engineer is revealed, an incident report is filed with the hacker's tactics noted and distributed, so that employees can be on the alert for such a person and know what to expect if they encounter them. For such an offensive layer to work well, there needs to be a clear, easy-to-follow procedure that an employee can initiate as soon as they notice suspicious behaviour. This process should result in an active pursuit of the hacker, with the appropriate employees forewarned of the attacker. Otherwise, every contact with the social engineer will be dealt with in isolation by individual employees, and no connection will be made between incidents until it is too late.
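The shared-log idea above, where each contact is recorded centrally so that incidents can be correlated rather than handled in isolation, could be sketched as follows. The record fields and function names are hypothetical illustrations, assuming a minimal in-memory log.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class SocialEngineeringIncident:
    """Hypothetical incident record for the incident response layer.
    A shared log lets the response team connect related contacts
    instead of each employee handling them in isolation."""
    reporter: str
    channel: str   # e.g. "phone", "email", "in person"
    tactics: str   # what the suspected attacker said or did
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


incident_log = []


def file_incident(reporter, channel, tactics):
    # Filing appends to the shared log and returns the record so the
    # noted tactics can be distributed as a forewarning to employees.
    record = SocialEngineeringIncident(reporter, channel, tactics)
    incident_log.append(record)
    return record


alert = file_incident("helpdesk", "phone",
                      "caller claimed to be IT and demanded a password reset")
print(len(incident_log), alert.channel)
```

In practice this role is played by a ticketing or SIEM system, but the essential property is the same: one log, visible to the response team, rather than scattered individual recollections.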
Despite being a long-standing and well-documented threat, social engineering remains one of the most formidable. Even though Verizon's Data Breach Investigations Reports over the last few years have consistently named phishing as one of the top threats companies should be aware of, it remains the most used hacking technique for data breaches, with over 90% of data breach incidents starting with a phishing email (Verizon, 2016). Once organizations start to take social engineering more seriously, and apply proper defences rather than the bare minimum to counteract its effectiveness, social engineering will lose its potency and become a far less effective tool.
Aecom, (2015). “Converged Resilience™: The cybersecurity of critical infrastructure” Retrieved from http://www.aecom.com/solutions/security-resilience/
Bankvault. (2015). “Definition of the Day: Quid Pro Quo Attack.” Retrieved from https://www.bankvaultonline.com/knowledge-base/definition-of-the-day/definition-quid-pro-quo-attack/
BBC, (2004). “Passwords revealed by sweet deal,” Retrieved from http://news.bbc.co.uk/1/hi/technology/3639679.stm
Bisson, D. (2015). “5 Social Engineering Attacks To Watch Out For.” Retrieved from https://www.tripwire.com/state-of-security/security-awareness/5-social-engineering-attacks-to-watch-out-for/
Cate, Fred H., (2006). “Phishing and Countermeasures: Understanding the Increasing Problem of Electronic Identity Theft (Edited by Markus Jakobsson and Steven Myers).” Books by Maurer Faculty. 135. Retrieved from http://www.repository.law.indiana.edu/facbooks/135
Cialdini, R. B. (1993). Influence: The Psychology of Persuasion. ISBN: 8580001041766
Chapman, S. (2009). “How a man used social engineering to trick a FTSE-listed financial firm.” Retrieved from http://www.computerworlduk.com/security/how-a-man-used-social-engineering-to-trick-a-ftse-listed-financial-firm-14706/
Cranor, L. (2016). Time to rethink Mandatory Password Changes. Federal Trade Commission.
Dewey, C. (2014). “Forget ‘Celebgate.’ Hackers are gunning for the nude photos of ordinary women and underage girls.” Retrieved from https://www.washingtonpost.com/news/the-intersect/wp/2014/10/07/forget-celebgate-hackers-are-gunning-for-the-nude-photos-of-ordinary-women-and-underage-girls/?utm_term=.fa88e90864da
Europol. (2017). “European Union, Serious and Organised Crime Threat Assessment (SOCTA), Crime in the age of technology.” Retrieved from https://www.europol.europa.eu/activities-services/main-reports/serious-and-organised-crime-threat-assessment
Gollan, N. & Carew, N., (2016). “Why companies are exposed to social engineering.” Retrieved from https://www.senseofsecurity.com.au/sitecontnt/uploads/2016/04/Sense-of-Security-Whitepaper-Social-Engineering-V1.1-01Apr16.pdf
Granger, S. (2001). “Social Engineering Fundamentals, Part I: Hacker Tactics,” Symantec Retrieved from https://www.symantec.com/connect/articles/social-engineering-fundamentals-part-i-hacker-tactics.
Granger, S. (2002). “Social Engineering Fundamentals, Part II: Combat Strategies” Symantec Retrieved from https://www.symantec.com/connect/articles/social-engineering-fundamentals-part-ii-combat-strategies.
Heath, E. (2017). “How to improve phishing awareness by 300% in 18 months.” Retrieved from https://www.rsaconference.com/writable/presentations/file_upload/hum-t11-how-to-improve-phishing-awareness-by-300-percent-in-18-months.pdf
Lexihut Professional Advisory Platform. (2016). “Real Future What Happens When You Dare Expert Hackers To Hack You Episode 8.” Retrieved from https://www.youtube.com/watch?v=F78UdORll-Q
Lowenstein, G. (1994). “The Psychology of Curiosity: A Review and Reinterpretation.” Psychological Bulletin, 116, 75-98. Retrieved from http://int-des.com/wp-content/uploads/2013/12/PsychofCuriosity.pdf
NCSC, (2015). National Cyber Security Centre. The Problems with forcing Regular Password Expiry. Retrieved from https://www.ncsc.gov.uk/articles/problems-forcing-regular-password-expiry.
Ortutay, B. (2015). “Companies send fake phishing emails to test security” Retrieved from http://www.pressherald.com/2015/02/12/companies-send-fake-phishing-emails-to-test-security/
Peltier, T. R., “Social Engineering: Concepts and Solutions,” Information Systems Security, Vol 15, (5), Retrieved from http://www.tandfonline.com/doi/abs/10.1201/1086.1065898X/463220.127.116.1160901/95427.3.
Reeder, M. (2016), “Warning Over Fake HMRC and PayPal emails,” Yorkshire Post, retrieved from http://www.yorkshirepost.co.uk/news/crime/warning-over-fake-hmrc-and-paypal-emails-1-7834596.
Rusch, J. J. (2000). "The 'Social Engineering' of Internet Fraud." United States Department of Justice.
Skinner, B. F. (1953), “Science and Human Behaviour,” Retrieved from https://books.google.co.uk/books?id=QcbJInkd_iMC&dq.
Stasiukonis, S. (2006). “Social Engineering, The USB Way.” Retrieved from http://web.archive.org/web/20060713134051/http://www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1
Tetri, P. & Vuorinen, J. (2013). “Dissecting Social Engineering.” Retrieved from http://www.tandfonline.com/doi/abs/10.1080/0144929X.2013.763860
Thornton, K. (2016). “5 Types of Social Engineering Attacks.” Retrieved from https://www.datto.com/blog/5-types-of-social-engineering-attacks
Verizon. (2016). “2016 Data Breach Investigations Report.” Retrieved from https://regmedia.co.uk/2016/05/12/dbir_2016.pdf
Whitman, M. E., Mattord H. J., (2016). “Management of Information Security.” Retrieved from https://books.google.co.uk/books?id=_aIZDAAAQBAJ&source=gbs_navlinks_s.
Wong, M. (2017). "Pizza over privacy? Stanford economist examines a paradox of the digital age." Stanford News. Retrieved from http://news.stanford.edu/2017/08/03/pizza-privacy-stanford-economist-examines-paradox-digital-age/.
Žukina, T. (2015). “Social engineering techniques: Vishing, quid pro quo, tailgating, baiting,” Retrieved from https://sgros-students.blogspot.co.uk/2015/11/social-engineering-techniques-vishing.html