The Real Vulnerability of the Cyberworld: You and I
Adelphi, MD | February 29, 2012
By George Platsis
In the second decade of the 21st century and beyond, the cyberworld is an inescapable reality of our lives, and the threat it creates is the most pervasive and pernicious threat facing the United States today (Sharp, Sr., 2010). In this reality, we face a myriad of threats, many of them unknown to the everyday user of the cyberworld. For all our technological advancements and defensive measures, it can be argued that, while operating within the cyberworld, the most critical vulnerability is in fact the human element. Moreover, this vulnerability is only exacerbated by the naivety of the general public and lawmakers, who do not take the issue seriously or even understand what “cybersecurity” truly is (Cyber Security: U.S., 2006).
From an IT manager’s perspective, this can be a cumbersome and daunting reality. The internet, in essence, is a gigantic client-server system on a wide-area network (Hueneman, 2000), and in a context where virtually anybody with the appropriate hardware can access this network, a cyberattack can occur from a near-limitless number of sources and even from benign error (Cyber Security: U.S., 2006). It is therefore not unreasonable to assume that a knowledgeable IT manager would be frustrated by misuse of the internet, particularly when that misuse is a function of organizational policy and individual usage.
The danger is that once inside a network, without the proper mitigating strategies in place, an attacker can do much harm, in both the short and long term. Context is important in understanding this issue. The original architecture of the internet was appropriate for its original intents and uses, but as time progressed, and as intents and usage changed, the integrity necessary for secure communications diminished (Hoar, 2005). Moreover, few users understand that because so many systems are built on the premise of “single point entry,” an attacker who gets into the network may have an unprecedented amount of access to anybody and anything on that network (Hueneman, 2000). The threat is therefore exacerbated by the human-element vulnerability, and it becomes even more daunting when interdependencies and overlaps in, and within, systems are taken into account.
From a macro sense, it is reasonable to state that few people have the understanding to implement the necessary cybersecurity measures required for an organization, ranging from small- and medium-sized organizations all the way up to enterprise and government systems. This reality is very much a function of policy (or the lack thereof). If the policy is not drafted properly, or not enforced in a meaningful manner, no level of individual user awareness can address the vulnerability; in essence, the first stage of awareness begins at the very top of the organization. To understand the magnitude of the cyber threat, data from 2010 suggest that fifteen million victims lose more than fifty billion dollars each year (Sharp, Sr., 2010); it is reasonable to assume that this figure is even larger today. As noted in a previous exercise, the global cybercrime industry is enormously lucrative, with estimates putting it at over $114 billion USD annually (Serrano, 2011), making this an extraordinary threat.
From a micro sense, the vulnerability caused by humans and internet usage is only further compounded by the fact that the “average daily user” does not truly understand the implications of their misuse (and this is not to suggest that the misuse is malicious – though it can be – but rather, the misuse is benign in nature). As of September 14, 2009, more than 10,450,000 American residents had been victimized by identity theft in 2009 alone, and that number increases by one victim each second (Sharp, Sr., 2010). To make matters even worse, the proliferation of information on how to infiltrate, and specifically take advantage of poor policy and naïve users, is readily available on the internet (Gripman, 1997). Given that corporate and government breaches are widespread and enormous, the misuse of a government or organizational computer (or any piece of hardware that connects to the internet) by an individual user makes an already difficult challenge that much more difficult.
For all this, valid concerns remain that countries like the United States, and organizations within them, are not organized and prepared to counter and respond to cybersecurity realities (Cyber Security: U.S., 2006). Further, even with the usage, assistance, and scrutiny of experts, funds, and the development of less vulnerable systems, security problems continue (Hoar, 2005), which accentuates the need for further education at both the policy and user level. The issue can be illustrated by the sheer quantity of patches and system updates (made available on a near-daily basis), which overwhelms even the best IT administrators (Nojeim, 2010), showing that using technology as a crutch for security, rather than as a tool, is not only foolish but unbelievably dangerous.
The growing number of victims suggests that not only has the vulnerability not been properly addressed, but it may not even be properly understood yet (Sharp, Sr., 2010). No level of technological security can ever provide 100% security, and, even with the most robust systems, if the user (and even the administrator) does not understand and appreciate the nature of the threat, the human element will remain the single most important vulnerability in the cybersecurity discussion, particularly when any technological fix can have unexpected side effects and take a great deal of time to implement (Nojeim, 2010).
One of the main reasons this occurs is poor all-round awareness amongst users, both daily and professional. This is not hard to believe when 24 American federal agencies (that should be considered “information-sensitive”) report spending an average of $19.28 per employee on security training (GAO No. 07-837, 2007). And while these data are a few years old, it is not unreasonable to assume that not much has changed since the report, particularly given current economic pressures, cutbacks, and austerity measures. Because this vulnerability is so widespread, it is reasonable to assume that users do not know the security implications of their actions, do not know the personal risk they put themselves in, and do not know what unnecessary strains they place on systems and networks.
An underlying issue for these vulnerabilities is that agencies and organizations rarely implement or enforce their own cybersecurity programs (GAO No. 07-837, 2007). In many cases, personal use of the internet on a business or government network, which happens often, creates risks (because standards are not adhered to), puts personal information at risk, and places unnecessary strains on resources, such as bandwidth (Sharp, Sr., 2010).
Within that context, some of the most damaging intrusions are a function of cyberexploitation. Cyberexploitation refers to the obtaining and collection of information, sometimes over an extended period of time (information that would otherwise be kept confidential), and, in many cases, it may go completely undetected because it works slowly and meticulously (Lin, 2010). In fact, in any type of warfare, the best attack is the one that goes completely unnoticed until it is too late for the other side to counter and respond.
The problem with this is that so many times it is a human decision, not a technological one, which allows the threat, such as cyberexploitation, to come to fruition. If one chooses not to take secure measures, the vulnerability is human, not technological. If one chooses not to implement effective and efficient cybersecurity policies, the vulnerability is human, not technological. Few people ask: “am I vulnerable to cyber threats?” The reality is: yes, they are. But there is abundant evidence to suggest that most internet users are not aware of the threats, and, even if they are, they again choose to avoid the precautionary measures that could mitigate the threat. And that, in essence, is the crux of the vulnerability: the human choice not to take secure measures. One estimate suggests that 95% of all network intrusions could be avoided (Who Might Be, 2004), which raises the question: why are so many intrusions happening? There are readily available techniques and procedures, particularly at the human level, that could curb the threat and reduce the vulnerability, yet even federal agencies are not taking these measures. Some of these include (GAO No. 07-837, 2007):
- Identify and authenticate users to prevent unauthorized access
- Enforce the principle of least privilege to ensure that authorized access is necessary and appropriate
- Establish sufficient boundary protection mechanisms
- Apply encryption to protect sensitive data on networks and portable devices
- Log, audit, and monitor security-relevant events
- Restrict physical access to information assets
- Configure network devices and services to prevent unauthorized access
- Assign incompatible duties to different individuals or groups so that no one individual can control all aspects of the information system
- Maintain and test continuity of operations plans
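A few of the listed controls (authenticating users, enforcing least privilege, and logging security-relevant events) can be illustrated together in a minimal sketch. The class name, in-memory storage, and resource names below are assumptions for illustration; a real system would back this with a directory service and tamper-evident logs.

```python
from datetime import datetime, timezone

class AccessController:
    """Deny-by-default access control with an audit trail (illustrative)."""

    def __init__(self):
        self._grants = {}    # user -> set of permitted resources (least privilege)
        self.audit_log = []  # security-relevant events, per the GAO measures

    def grant(self, user, resource):
        # An explicit grant is the only way a resource becomes reachable.
        self._grants.setdefault(user, set()).add(resource)

    def check(self, user, resource):
        # Anything not explicitly granted is denied, and every check is logged.
        allowed = resource in self._grants.get(user, set())
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "resource": resource,
            "allowed": allowed,
        })
        return allowed

ac = AccessController()
ac.grant("clerk", "payroll-report")
assert ac.check("clerk", "payroll-report") is True
assert ac.check("clerk", "hr-database") is False  # default deny
```

The key design choice is that denial is the default: a forgotten grant fails closed rather than open, which is exactly the posture the least-privilege measure above calls for.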
This paper suggests that every single one of these critical issues could be easily addressed, yet they have not been, because of human choice. It is not technological imperfections that are costing us; it is our ignorance. And something that cannot be overemphasized: there are not only costs associated with the immediate attack, but also mid- and long-term costs, both tangible and intangible, ranging from repair and restoration costs, which can be staggering (Gripman, 1997), to damaged reputation, brand, and image. Moreover, one of the most damaging effects of an attack is the loss of confidence by a consumer base or constituency, something that, it can be argued, in many cases is never fully restored. When the simple opening of an e-mail can cause irreparable damage (Hoar, 2005), this alone should demonstrate the potential costs of the human vulnerability in the cyberworld.
To further illustrate that awareness is lacking, virtually every report, congressional hearing, journal article, and news article lists the human element as a vulnerability and identifies education and awareness as the mitigating strategy. The development of any cybersecurity program is a challenge in today’s “always connected” world. For example, the Department of Defense is the most prolific agency on YouTube, and the White House has the most followers on Twitter and the most Facebook friends (Sharp, Sr., 2010), organizations one may not, at first thought, think of as having such a high social media profile.
What this shows is that the “always connected” world very much includes the institutional level, even if through social media. Moreover, the rise of mobile connectivity, along with remote access, further complicates the vulnerability issue, as individuals and organizations do not carry through with the necessary precautions (Who Might Be, 2004) required in a mobile environment. And while all these means of communication, such as social media and remote connectivity, improve many facets of productivity and collaboration, they are a risk because of how easily information can be made public (and subsequently abused) through misuse of the internet by the user (Sharp, Sr., 2010).
These are complicated issues, and even those tasked with securing a network can be overwhelmed, suggesting that this human vulnerability affects not only the naïve but also the educated. Network administration is extremely complex, and so much of the technology is so new that many administrators do not (or cannot) appreciate its scope, magnitude, effect, and power (Hueneman, 2000). Policy makers, end users, and administrators need to appreciate that many network intrusions could be avoided by using a mixture of strategies, including prevention, detection, and response (Who Might Be, 2004).
Taking Advantage of the Human Vulnerability
A semantic attack is a computer-based attack that exploits human vulnerabilities. More specifically, it takes advantage of the way humans interact with computers or interpret messages, as opposed to exploiting system vulnerabilities (Kumaraguru, Sheng, Acquisti, Cranor, & Hong, 2010). Taking advantage of human vulnerabilities can cause a plethora of problems (as any cyberattack would), ranging from infecting the network with a virus, to intentionally shutting down a system, to disrupting supply chains (Gripman, 1997). Even experienced administrators can still make mistakes, including something so seemingly benign as assuming “private” means “secure” (Hueneman, 2000), which it clearly does not.
The exploitation of human vulnerabilities can in many cases be attributed to users and operators being tricked (or even blackmailed) into doing the bidding of others (Lin, 2010), such as opening a malicious e-mail or giving up their own or their organization’s information through a phishing attempt. For the unsuspecting, even the simplest tricks, such as opening an e-mail that states “PLEASE TREAT AS URGENT” (Hoar, 2005), can cause a series of cascading problems for the network. For any daily user of e-mail, the improper treatment of unsolicited messages can end up costing more than money; it can cost data and reputation.
Behavioural issues and patterns are most often the easiest to exploit, particularly in an arena where anonymity is presumed by so many. For example, social trust and recommendation services have become wildly popular of late because they allow users to learn about the popularity and quality of products, items, and services (Feng, Liu, & Dai, 2012). These social rating systems are very convenient, but they also act as a template for manipulating users in cyberspace. The premise behind this argument is that in cyberspace you rarely know who you are talking to at the other end, which, in essence, is a threat. By understanding the behavioural patterns of the user, and what can influence them in cyberspace, the means of infiltration become infinitely easier for the potential attacker.
So how does one take advantage of the human vulnerability? By creating the illusion of trust. “Social engineering” is an often-used tactic that exploits relationships of trust, reduces situational awareness, and creates a sense of urgency (Hoar, 2005). In related cases, victims follow orders or directives because the other side “appears” familiar; in others, there is the promise of a reward if the user proceeds by doing x, y, and z. For many users, it is a tempting proposition, but again, it illustrates how human psychology, behaviour, and choice play such a critical role in understanding the threats and vulnerabilities of cyberspace. Nor should one assume that being “educated” or a “professional” equates to having the necessary knowledge of the cyberworld. One study, for example, showed that an educated and aware group of users (those within a university setting) had a nearly 50% failure rate, across the board, on the first day of a phishing test (Kumaraguru, et al., 2009).
It is not the hardware or the software systems, the transmission media, the local area networks, the wide area networks, the enterprise networks, or the intranets that make the user click “open” on the e-mail they have just received; it is the choice of the individual. The assumption that somebody is curious or trusting enough to open an e-mail, or to plug a USB jump drive found in their workplace’s parking lot into their computer, is a valid one (Hoar, 2005), and one that has demonstrated positive results for the attacker, even at the most secure facilities.
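The “PLEASE TREAT AS URGENT” trick illustrates why even crude heuristics can help users spot a semantic attack before clicking. The sketch below flags two common red flags, manufactured urgency and links whose visible text hides a different destination; the keyword list, scoring, and message fields are illustrative assumptions, not a vetted filter.

```python
import re

# Illustrative urgency phrases; a real filter would use a much larger,
# maintained corpus rather than a hand-picked list.
URGENCY = re.compile(r"\b(urgent|immediately|act now|verify your account)\b", re.I)

def red_flags(subject, body, links):
    """Return warning strings for a suspicious message.

    `links` is a list of (display_text, actual_url) pairs, so a link whose
    visible text names one site but points somewhere else is caught.
    """
    flags = []
    if URGENCY.search(subject) or URGENCY.search(body):
        flags.append("manufactured urgency")
    for text, url in links:
        if text.startswith("http") and not url.startswith(text):
            flags.append(f"link text '{text}' hides destination '{url}'")
    return flags

msg = red_flags(
    "PLEASE TREAT AS URGENT",
    "Verify your account within 24 hours.",
    [("http://mybank.com", "http://203.0.113.9/login")],
)
assert len(msg) == 2
```

Heuristics like these do not replace user education; they only make the human decision point (click or not) better informed.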
As time progresses, though, there is an interesting by-product of the proliferation of information. More and more, people are adapting to technology and making it that much more a part of their daily lives, and in doing so, they are also learning ways to circumvent standard cybersecurity practices for their own personal use, be it something seemingly benign (like checking Facebook messages) or something malicious (such as sending intellectual property to a competing organization) (Gripman, 1997). The use of proxies, VPNs, and the like ultimately defeats the intents of the IT manager, widening the threat spectrum. Again, why is this happening? Human choice.
Addressing the Challenge
Passwords can simultaneously be the most important yet most vulnerable element of cybersecurity. But users far too often pick passwords that are atrociously easy to guess or to crack. Over half of the passwords in use are estimated to be the first names of spouses and children, birthday and anniversary dates, or the names of superheroes (Gripman, 1997). In some cases, vendor-issued default passwords are not even changed (GAO No. 07-837, 2007). Within that context, it is hard to argue that the human element is not the most critical vulnerability.
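A simple screening step can reject the weak choices described above before they ever reach a system. The denylists and length threshold below are illustrative assumptions; real deployments check candidates against breached-password corpora rather than a hard-coded list.

```python
import re

# Hypothetical examples of the weak categories named in the text:
# superhero names, common first names, and unchanged vendor defaults.
COMMON_NAMES = {"batman", "superman", "jennifer", "michael"}
DEFAULTS = {"admin", "password", "changeme", "letmein"}

def is_weak(password, personal_terms=()):
    """Flag passwords matching the weak patterns described above.

    `personal_terms` lets the checker supply spouse/child names and
    similar personal details known to be guessable.
    """
    p = password.lower()
    if p in DEFAULTS or p in COMMON_NAMES:
        return True
    if any(term.lower() in p for term in personal_terms):
        return True  # contains a spouse/child name, etc.
    if re.fullmatch(r"\d{4,8}", p):
        return True  # bare dates such as 19900714 or 0714
    return len(password) < 12  # too short to resist cracking

assert is_weak("changeme")
assert is_weak("Jennifer1985", personal_terms=["jennifer"])
assert not is_weak("pronounceable-n0nsense-w0rds!")
```

Note that the final example follows the paper’s own recommendation of pronounceable nonsense words with numbers and special characters.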
At the micro level, this paper has tried to articulate that the user is in many cases naïve. Education and awareness tools work. In the study already mentioned, where a near 50% failure rate occurred on the first day, by the end of the training session the users acted in a much more prudent and secure way. Moreover, they retained the information and knowledge past a 28-day period (Kumaraguru, et al., 2009). It is something that cannot be escaped: virtually every major report recommends training and awareness as critical elements of success. And this training is not limited to the end user; it extends to those responsible for policy-level decisions and those tasked with enterprise-wide security. Security experts require continual training because the threat spectrum is not static. Rather, it is amorphous, constantly changing at an accelerating rate.
Executive leadership must understand the implications of the vulnerability. It is one of the biggest challenges in the overall effort; the reality is that in both the private and public sectors, there is little appreciation and awareness of the subject matter (Examining The Vulnerability, 2000). Moreover, there needs to be an emphasis on ensuring those already responsible for cybersecurity have state-of-the-art training (Examining The Vulnerability, 2000) to further their capabilities and stay up to date on new threats. Only with such a departure point can the human vulnerability be minimized. So many of the combative elements begin at the policy level, such as:
- Ensure that only authorized individuals are granted access (GAO No. 07-837, 2007)
- Encourage the use of passwords that are not easy to guess, like pronounceable nonsense words, or words with numbers and special characters (Gripman, 1997)
- Change passwords at frequent intervals (Gripman, 1997)
- Engrain the habit of memorizing passwords as opposed to writing them down (Gripman, 1997)
- Encourage the recruitment of new expertise (Examining The Vulnerability, 2000)
- Implement organization-wide security policies, procedures, and programs (GAO No. 07-837, 2007)
- Conduct periodic assessments of risks and vulnerabilities (GAO No. 07-837, 2007)
- Conduct surprise tests of the system
Disciplinary action can also be used, though how it is administered is always a matter of huge debate. There is, of course, the practice of “blacklisting” as a punitive measure when a user misuses their system, but unfortunately, this practice has little to no effect. In these cases, the user has simply had their credentials restricted, but seldom knows why, which means the vulnerability still exists, except the threat may be different next time.
One policy change that would significantly reduce the impact of the human vulnerability is the practice of “whitelisting,” but this concept comes with significant pushback, particularly from senior management and executive leadership. Whitelisting is a way of granting or denying access rights and permissions that is very similar to “least privilege.” The principle states that users are granted access only to the programs and files they need to do their work (GAO No. 07-837, 2007). Unfortunately, this practice is often poorly performed, and in many cases done in a shoddy way, particularly if the IT manager is not sufficiently trained. An effective whitelisting policy requires an extremely knowledgeable IT manager, and in today’s landscape, demand for such IT managers is high while supply is low.
Continuing, there is a very obvious upside to whitelisting: it restricts everything by default and permits only what the user absolutely needs to perform their task; no more, no less. But then comes the pushback from users (particularly executives) who say: “I need access to the internet to do my job, the intranet is not enough,” or “I need access to my Twitter account to do my job.” This very quickly becomes a muddy battle, particularly when IT crosses paths with leadership. It takes a very brave IT manager to say, “Ms. CEO, I am sorry, but you really do not need all those applications on your company iPhone. In fact, they should be blocked because they pose a security threat to the company.”
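The default-deny logic behind whitelisting can be sketched in a few lines. The roles and hostnames below are hypothetical, and a real deployment would enforce this at the proxy or firewall layer rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical per-role whitelist: each role may reach only the hosts
# it absolutely needs (least privilege applied to network access).
ROLE_WHITELIST = {
    "clerk": {"intranet.example.com"},
    "analyst": {"intranet.example.com", "data.example.com"},
}

def may_connect(role, url):
    """Default deny: a destination is reachable only if its host is
    explicitly whitelisted for the user's role."""
    host = urlparse(url).hostname
    return host in ROLE_WHITELIST.get(role, set())

assert may_connect("analyst", "https://data.example.com/report")
assert not may_connect("clerk", "https://twitter.com/home")  # denied by default
```

The executive pushback described above maps directly onto this structure: adding Twitter to a role’s set is a deliberate, reviewable policy decision rather than a silent default.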
While this would very much be a paradigm shift for many organizations, integrating the Chief Information Officer (or equivalent) into the business and strategy issues of the organization, or creating such a role, would be a very beneficial start. In this capacity, the CIO could immediately address the security vulnerabilities while keeping the critical communication channels to executive leadership open, and could fully understand what sort of cyber access is required at both the strategic and operational levels.
Another technique to combat this issue is to work with your supply chain. In many cases, their vulnerabilities are your vulnerabilities, and yours are theirs. There is a bit of a paradox here: many of the vulnerabilities in this case may be of a technical nature, but failing to collaborate with your supply chain is itself a human vulnerability, because again, it is a choice not to address the issue.
For all that, though, combating cyber vulnerabilities in a holistic manner requires multiple strategies that generally fall into three categories (Kumaraguru, Sheng, Acquisti, Cranor, & Hong, 2010):
- Eliminating the threat
- Warning users about the threat
- Training users about the threat
This paper suggests that while all three are combative techniques for addressing cyber threats, the vulnerability that touches on all of them is still the human element. To eliminate the threat, an organization needs highly trained individuals; to warn users about the threat, those tasked with the responsibility need the appropriate awareness of cyber issues; and training the users is a function of education that ranges from everyday use to the implications of poor policy.
Given that the threat is not properly understood, we face a reality not all too dissimilar from a pre-9/11 world. There were a handful of people who recognized the threats and vulnerabilities, yet most did not act on their advice. The same can be said for Hurricane Katrina. The reality is that many of the vulnerabilities exposed during a disaster are not acted on until the post-disaster phase. In the cyberworld, it may require a catastrophic cyber incident to force change (Sharp, Sr., 2010).
Unfortunately, one of the compounding problems is that there are no generally accepted network security standards (Gripman, 1997). If these existed, they would actually help reduce the risk level through knowledge transfer and operational familiarity, but they do not. Therefore, the hardening of targets is left to the individual or the group, and in the rare case where a standard exists, it is limited to a small group. Addressing the vulnerabilities, be it through working with your supply chain, creating a better design, or education and awareness (Chabinsky, 2010), is all needed. Moreover, one must understand that absolute protection is not only unachievable, but in many cases not required. With proper education strategies across the board, and with competent administrators, even the most malicious attacks (which cause significant damage) can be recovered from, so long as the recovery infrastructure and knowledge are already in place and ready to be activated at a moment’s notice.
In summary, an organization must commit itself to a security policy that addresses the vulnerability caused by human usage of the internet. No level of technology will stop an attack if the user is uneducated and constantly, if unknowingly, circumvents the security protocols designed to protect the network. Within that context, with the human element identified as the single most critical vulnerability along the network, if individual and institutional users alike do not choose the proper level of awareness and education, the threat will always be there.
To close with an analogy: if somebody has a driver’s license and knows only the basics of a car’s operation, such as the accelerator and steering wheel, the driver may be able to get from Point A to Point B easily. But if they do not know how to use the brakes or the signal lights, or willingly choose not to follow traffic signs, they are a danger not only to themselves but to everybody else on the road; the internet is no different.
1) Chabinsky, S. R. (2010). Cybersecurity Strategy: A Primer for Policy Makers and Those on the Front Line. Journal of National Security Law & Policy, 4(1), 27.
2) Cyber Security: U.S. Vulnerability and Preparedness: Hearing Before the Committee on Science, House of Representatives, 109th Cong. (2005).
3) Department of Defense. (2011). Department of Defense Strategy for Operating in Cyberspace. Washington, DC: Department of Defense. Retrieved February 16, 2012, from http://www.defense.gov/news/d20110714cyber.pdf
4) Feng, Q., Liu, L., & Dai, Y. (2012, January). Vulnerabilities and Countermeasures in Context-Aware Social Rating Services. ACM Transactions on Internet Technology, 11(3), 11.
5) Examining The Vulnerability of the U.S. Systems to Cyber Attack, Focusing on the Administration's National Plan for Information Systems Protection and its Implications Regarding Privacy: Hearing Before the Subcommittee on Technology, Terrorism, and Government Information of the Committee on the Judiciary United States Senate, 106th Cong. (2000).
6) Government Accountability Office Rep. No. 07-837 (2007).
7) Gripman, D. L. (1997). Comments: The Doors Are Locked But The Thieves And Vandals Are Still Getting In: A Proposal in Tort to Alleviate Corporate America's Cyber-Crime Problem. The John Marshall Journal of Computer & Information Law, 16, 167.
8) Hoar, S. B. (2005). Trends in Cybercrime: The Dark Side of the Internet. Criminal Justice, 20, 4.
9) Hueneman, D. (2000). Comment: Privacy On Federal Civilian Computer Networks: A Fourth Amendment Analysis of the Federal Intrusion Detection Network. The John Marshall Journal of Computer & Information Law, 18, 1049.
10) Kumaraguru, P., Sheng, S., Acquisti, A., Cranor, L. F., & Hong, J. (2010, May). Teaching Johnny Not to Fall for Phish. ACM Transactions on Internet Technology, 10(2), 7.
11) Kumaraguru, P., Cranshaw, J., Acquisti, A., Cranor, L., Hong, J., Blair, M. A., & Pham, T. (2009). School of Phish: A Real-World Evaluation of Anti-Phishing Training. Symposium on Usable Privacy and Security (SOUPS). Mountain View, CA: Carnegie Mellon University.
12) Lin, H. S. (2010). Offensive Cyber Operations and the Use of Force. Journal of National Security Law & Policy, 4(1), 63.
13) Nojeim, G. T. (2010). Cybersecurity and Freedom on the Internet. Journal of National Security Law & Policy, 4(1), 119.
14) Serrano, A. F. (2011, September 14). Cyber Crime Pays: A $114 Billion Industry. Retrieved February 15, 2012, from The Fiscal Times: http://www.thefiscaltimes.com/Articles/2011/09/14/Cyber-Crime-Pays-A-114-Billion-Industry.aspx#page1
15) Sharp, Sr., W. G. (2010). The Past, Present, and Future of Cybersecurity. Journal of National Security Law & Policy, 4(1), 1.
16) Who Might Be Lurking at Your Cyber Front Door? Is Your System Really Secure? Strategies and Technologies to Prevent, Detect and Respond to the Growing Threat of Network Vulnerabilities: Hearing Before the Subcommittee on Technology, Information Policy, Intergovernmental Relations and the Census of the Committee on Government Reform, House of Representatives, 108th Cong. (2004).