It is important to know about hacking because these skills, under controlled circumstances and with the proper authority, can be used to determine system vulnerabilities by taking the same actions a hacker would take. Understanding this topic also makes it possible to counter the attempts of others to break into or misuse systems. Those protecting systems must be able to think like the hacker to anticipate the moves an attacker will make. This includes moves made during an attack, but the focus is on anticipating what hackers will do before an attack even happens. Anticipation allows security professionals to put the proper controls in place well in advance, so that an attack is thwarted or hindered enough that the attacker abandons the target or is caught while attempting to break in. Arce and McGraw, authors of "Why Attacking Systems Is a Good Idea," state that "the majority opinion is that the only way to properly defend a system against attack is to understand what attacks look like as deeply and realistically as possible" (2004, p. 17). This document will cover the techniques used by penetration testers and hackers to discover system vulnerabilities, including social engineering, spoofing, fingerprinting, fuzzing, footprinting, and scanning.
I. Social Aspects of Hacking
Hacking is more than a purely technical concept and more than the sensationalized concept the media has promoted. The term hacker originally referred to a person who found a creative new use for equipment (Erickson, 2003). Eric Raymond, author of "The Jargon File," a compilation of hacker terms, defines a hacker as "a software or hardware enthusiast who enjoys exploring the limits of code or machine" (McFedries, 2004). Many creative minds over the years have been dubbed, or have called themselves, hackers. A distinct culture has also developed around hacking, with its own ideals. True to the original definition, the culture strives to learn more about equipment and technology, to find new and creative uses for that technology, and to distribute this knowledge to anyone who desires it. The culture might seem similar to academia, but the shared quest for knowledge is where the similarities stop and some very large distinctions begin. One tenet of this culture is the freedom of information (Erickson, 2003): the belief that all information, no matter how much it cost to develop or how sensitive it is, should be made freely and widely available to anyone who desires it (p. 2). While seemingly altruistic, this gives a person subscribing to the philosophy license to do whatever is necessary to obtain and distribute such information, so the pursuit of freedom of information sometimes involves illegal activities. Another distinction is anonymity. In academia, those who contribute to the common body of knowledge are well known. Hackers who contribute to the community often do not know the true identities of others and do not make their own identities known. On the positive side, this allows for a culture that embraces individuals with like-minded pursuits regardless of race, age, gender, or religion; since that data is unknown and not deemed necessary, it is not a factor in whether a member can participate (Erickson, 2003, p. 2). However, anonymity can also reduce some of the social barriers to deviance.
A. Hacker Classifications
The question of what a hacker is still remains, and a variety of labels have been used over the years. Some attempted to restore the hacker name to its former meaning, simply one who wishes to find creative uses for technology, by coining the term cracker: a computer deviant who uses technology maliciously (Erickson, 2003, p. 3). As of this writing, the term has not been widely adopted and the term hacker is still seen in a negative light. Around the same time, the term script kiddie was created to describe someone who uses the tools and techniques others have developed to commit unlawful acts (Erickson, 2003, p. 3). Unlike cracker, this term is commonly used in industry vocabulary.
Other terms distinguish hacking by medium, such as wire-based, wireless, and telephone hacking. The term whacker was created to describe hackers using primarily wireless technology, bluejacker to describe the hacking of Bluetooth devices (McFedries, 2004), and phreak to describe hackers of telephone networks (Chirillo, 2001, p. 84).
Some terms have come about to represent those who break into systems in order to improve them. These terms include white hat, ethical hacker, and samurai (McFedries, 2004), and the actions taken by these individuals are elements of what is called penetration testing (Bishop, 2007) or red teaming (Arce and McGraw, 2004). The terms red team and blue team are used in penetration tests involving multiple attackers and defenders: red teams are the attackers and blue teams are the defenders (White and Conklin, 2004). Three further terms describe those who attack systems: black hat, gray hat, and white hat. Bratus (2007) describes black hats as those who attack systems for selfish reasons such as money or prestige, white hats as those who attack systems while adhering to laws and standards of ethical behavior, and gray hats as those who attack systems to warn others about vulnerabilities but sometimes do so by violating laws. Even some in academia have been labeled gray hats when the publication of their findings resulted in legal action from companies (Bratus, 2007, p. 72). This document will use the term penetration tester or white hat for those who attack a system in order to improve or test it, and penetration testing for the act of attacking a system for that purpose. The terms hacker and attacker will be used for those with malicious intent.
B. How hackers learn
There is a difference between the way hackers think and learn and the way software developers do (Bratus, 2007). This process begins when hackers first learn about networking and continues as they implement solutions to problems and troubleshoot issues (2007).
Those who write software are taught to create solutions based on the previous work of others who have already found a way to do something. Developers are also not taught to understand the code they build upon (Bratus, 2007, p. 73); doing so would violate the black-box property, under which a developer does not have to understand how a module works, only how to use it. Developers are likewise taught to concentrate their security efforts on the most common scenarios so that their code has the most effect, but in doing so they marginalize other possible scenarios (Bratus, 2007).
Hackers, on the other hand, concentrate on the rare cases and develop software to exploit them. Hackers understand the building blocks developers use, such as APIs (Application Programming Interfaces), DLLs (Dynamic Link Libraries), and programming modules, and the vulnerabilities each one brings to the programs that use it (Bratus, 2007). Hackers learn early on the underlying technical details of computer software and systems, including packet and protocol analysis, manipulating data at various levels of communication, and malicious input (p. 74). Hackers read technical documents such as RFCs (Requests for Comments) and understand the underlying elements and technical details of how a system operates (Matthew, 2003, p. 125). This knowledge gives them power. Hackers often deviate from standards and do not use common APIs, DLLs, and programming modules; instead, they develop their own programs that give them more control over the full system state (Bratus, 2007, p. 73).
Hackers also learn a great deal from case studies in which other hackers post their successful attacks or new software development projects online. These posts are very technical and well documented. Programming case studies, by comparison, are often light, with many details and much technical information left out either for simplicity's sake or to protect confidential work products, although open source projects do offer quite a bit of source code for developers who wish to study it. This gives hackers an advantage when building software or designing an attack, as they have practical information on how to do so (Bratus, 2007).
C. Thinking Exploitation (Thinking about how to hack)
Attackers will not use the same method every time, and they are always searching for new attack avenues, so penetration testers must likewise be continually learning and trying new things. Software engineers often reach for more complex solutions to security issues, but that is not always the correct course when trying to think creatively. Often the answer is not a more complex or technical method of attacking the system but a simpler method using new techniques. Developers are encouraged to think simpler so that they will try things they would not have tried while focused on complex solutions (Perrone, 2007). Creative ideas are not always technically superior; sometimes they are merely new and unthought-of.
Hackers want to know where their efforts will produce the greatest results, and defenders want to know where their defenses will protect them best. Gordon states that in order to combat cyber attacks, defenders must be aware of the motivations and goals of attackers so that security controls and spending will be placed in the right areas, mainly where the attackers will focus their efforts (2006).
D. Social Engineering
Social engineering is an often forgotten tool of hackers. Social engineering involves asking for information using subterfuge and lies, and the human component of the organization will always be the weakest link (Mitnick and Simon, 2002). Attackers may pose as a user in need of assistance or as a technician trying to solve a problem; both can be very successful methods of obtaining information (Chirillo, 2001).
II. Software and Operating System Exploits
Buffer overflow exploits are the most widely used methods of bending a program to the hacker's own purposes (Erickson, 2003, p. 14) and are the most common exploit listed in Computer Emergency Response Team (CERT) advisories (Pincus, 2004). These attacks take advantage of software whose developers do not properly validate the data their programs receive, creating an opportunity to run malicious code on a server. Code that runs in this way is very dangerous because it runs with the credentials of the account the exploited program uses, which is generally a privileged account with access to many parts of the system (p. 15). Most writing on buffer overflows focuses on a concept called stack smashing, where an attacker inserts a pointer or return address into the stack (Pincus, 2004). The stack is the place in memory where a thread stores the data it is working on. Overwriting the saved return address causes the program to jump to another place in memory where the attacker's command or some system command lies, and that command is unintentionally executed by the program.
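The mechanics of stack smashing can be illustrated without real machine code. The sketch below is a toy simulation in Python, not an actual exploit: the buffer size, addresses, and function names are all invented. A fixed-size buffer sits directly below a saved return address in the same block of memory, and an unchecked copy of overlong input spills into the return-address slot.

```python
# Toy model of stack smashing: a 16-byte buffer stored directly below a
# saved 4-byte return address, as on a real call stack.
BUF_SIZE = 16

def make_stack_frame():
    """Stack frame as a bytearray: [buffer (16 bytes)][return address (4 bytes)]."""
    frame = bytearray(BUF_SIZE + 4)
    frame[BUF_SIZE:] = (0x08041234).to_bytes(4, "little")  # legitimate return address
    return frame

def unsafe_copy(frame, data):
    """Mimics C's strcpy(): copies data into the buffer with no bounds check."""
    frame[:len(data)] = data

def saved_return_address(frame):
    """Read back the 4 bytes sitting above the buffer."""
    return int.from_bytes(frame[BUF_SIZE:BUF_SIZE + 4], "little")

frame = make_stack_frame()
# 16 bytes of filler, then 4 bytes that land exactly on the return address:
unsafe_copy(frame, b"A" * BUF_SIZE + (0xDEADBEEF).to_bytes(4, "little"))
# The program would now "return" to the attacker-chosen address 0xDEADBEEF.
```

The 20-byte input is 4 bytes longer than the buffer, and those 4 extra bytes replace the saved return address, which is exactly the failure a bounds check would have prevented.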
Buffer overflow exploits have matured. Since 2000, more advanced buffer overflow exploits have been found that make stack smashing almost obsolete (Arce and McGraw, 2004, p. 18). Software engineers have been trained to counter stack smashing, and programs such as Crispin Cowan's StackGuard monitor the return address field to detect it (Arce and McGraw, 2004). In response, hackers have developed several highly refined methods of creating buffer overflows. These methods, such as arc injection, heap smashing, trampolining, and pointer subterfuge, are not defended against as well as stack smashing is (p. 20, 2004). Arc injection changes which elements of a program run or the order in which they run; it can be used to remove validation checks or security subroutines such as authentication or encryption methods (Seacord, 2005, sec. 2.7). Heap smashing differs from stack smashing in that it modifies a return address in dynamic memory, in the heap's tree-based structure, instead of within the program stack (Pincus, 2004). Trampolining is a variation of stack smashing that does not directly reference the code the attacker wants to execute; rather, the attacker references code that lies near the address space of the target code so that when the referenced code executes, it executes the attacker's code as well (Pincus, 2004, p. 22). Potter and McGraw state: "two-stage buffer-overflow attacks using trampolines were once the domain of software scientists, but now appear in zero-day exploits" (2004). This is one way attackers have learned to get around stack-monitoring programs such as StackGuard (Arce and McGraw, 2004).
Another buffer overflow variant is pointer subterfuge. It differs from stack smashing in that stack smashing alters the saved return address, while pointer subterfuge alters a pointer variable within a function, causing the program to run the hacker's code rather than its own (a function being a block of code that accepts arguments and returns a result to the program). Types of pointer subterfuge include function-pointer clobbering, which alters the reference to a function; data-pointer modification, which alters data the program will use for later processing; exception-handler hijacking, which alters the reference for error-handling subroutines to point to other code; and virtual pointer (VPTR) smashing, which alters a C++ program's virtual table (VTBL) pointer to point to a different VTBL created by the hacker (Pincus, 2004).
III. Hacking Concepts
Attacks have changed over the years, moving from being primarily network focused to more application focused (Curphey and Araujo, 2006). Attackers once tried to discover ways to breach a network, but now the focus is on obtaining access to applications. Applications host and interpret information, so they are prime targets for hackers who wish to obtain that information. Web applications are particularly popular targets because they are so accessible (Curphey and Araujo, 2006).
The hacker will have many tools in their possession and in order to achieve their goal they will need to go through many steps beginning with discovery and ending with the escalation of privileges. Some actions hackers take include hiding their activities, fuzzing, denial of service, spoofing, password cracking, and distributing malware.
For a hack to be successful, attackers must gather information on their targets. They do this through various discovery methods such as dumpster diving, domain querying, fingerprinting, ping sweeps, and port scanning. The simplest and sometimes overlooked of these is dumpster diving (Matthew, 2003), so called because dumpster divers rummage through an organization's garbage looking for documents that contain useful information.
Domain querying is not hard to do, and while it is not an attack in itself, it is part of gathering information about a target. Domain querying uses a whois command to determine the owner of a domain name (Chirillo, 2001). Whois registrations are public records on domain names maintained by region: ARIN for North America, APNIC for the Asia Pacific region, LACNIC for South and Central America and the Caribbean, and RIPE NCC for Europe and Africa (Matthew, 2003, p. 71). This allows the attacker to visit the web site or contact someone at the organization to gather further information that could be useful in attacking the system, such as the security components in place, contact information for social engineering, potential passwords, or IP addresses (Chirillo, 2001, p. 155). More ambitious hackers may even attempt a DNS (Domain Name Service) zone transfer which, if successful, provides the hacker with information on all hosts on the network, the services that are advertised, the IP addresses that DNS servers respond to, and more (Matthew, 2003, p. 200).
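A whois lookup is simply a plain-text query sent over TCP port 43 (the protocol is specified in RFC 3912). The sketch below shows both halves: sending the query and pulling "Key: value" fields out of the reply. The server name and the parsing rules are illustrative assumptions, as real registries format their responses differently.

```python
import socket

def whois_query(domain, server="whois.iana.org", timeout=5):
    """Send a whois query: the domain name, CRLF-terminated, over TCP port 43."""
    with socket.create_connection((server, 43), timeout=timeout) as s:
        s.sendall(domain.encode() + b"\r\n")
        chunks = []
        while (chunk := s.recv(4096)):
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")

def parse_whois(text):
    """Collect 'Key: value' lines into a dict, skipping '%' comment lines."""
    fields = {}
    for line in text.splitlines():
        if ":" in line and not line.lstrip().startswith("%"):
            key, _, value = line.partition(":")
            if value.strip():
                fields[key.strip().lower()] = value.strip()
    return fields
```

The returned fields (registrar, name servers, contact addresses) are exactly the footprinting data described above.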
The next discovery method is fingerprinting: determining the target's operating system, web server, or application by analyzing the responses given to standard input. A technique as simple as an ICMP (Internet Control Message Protocol) ping can reveal whether an operating system is UNIX, Linux, or Microsoft based on the gap in the ICMP sequence number field (Matthew, 2003, p. 111). Other fingerprinting clues include the TTL (Time to Live) setting on packets sent by the system, the TCP/IP window size, whether or not the don't-fragment bit is set, and the TOS (Type of Service) parameter (Matthew, 2003, p. 161).
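The TTL clue mentioned above can be sketched in a few lines. The initial TTL values used here are common defaults (64 for Linux/Unix, 128 for Windows, 255 for many network devices and some Unix variants), and since each router hop decrements TTL by one, rounding an observed TTL up to the nearest default gives a guess at the sender's OS. It is only a heuristic: defaults can be changed by administrators.

```python
def guess_os_from_ttl(observed_ttl):
    """Rough OS fingerprint: round the observed TTL up to the nearest
    common initial value, since each router hop decrements TTL by one."""
    defaults = {64: "Linux/Unix", 128: "Windows", 255: "network device/Solaris"}
    for initial in sorted(defaults):
        if observed_ttl <= initial:
            return defaults[initial]
    return "unknown"
```

An observed TTL of 57, for example, is consistent with an initial value of 64 reached over 7 hops, suggesting a Linux or Unix sender.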
Another method of discovery is the ping sweep, which tells an attacker which hosts are active on the network. Combined with DNS queries, a ping sweep can also reveal hostnames, which can go a long way toward determining the role of a server, printer, or other network node (Matthew, 2003, p. 112).
Once an attacker identifies potential targets through a ping sweep, they can find out whether those machines have specific ports open which could give the hacker an entry point. When an attacker finds an open port, they can potentially use it to execute an attack, because each port has a service associated with it that waits for a connection (Erickson, 2003, p. 162). Standard port scanning attempts to connect to every possible Transmission Control Protocol (TCP) port. This method logs the scanner's IP address, so standard port scanning is not recommended unless conducting a spoofing decoy scan where a spoofed IP address is used instead of the attacker's (p. 163); a standard scan without spoofing hands the attacker's location to the target organization. Stealthier methods include the stealth SYN scan, FIN scan, Xmas scan, and Null scan. These scans only open a connection halfway, so no traceable connection is established; instead, open ports are determined by process of elimination, since the attacker receives an RST message for each closed port and can conclude that the remaining ports are open (Erickson, 2003). The hacker chooses a scanning method based on their discovery work so far (Matthew, 2003, p. 127): characteristics of the network architecture and the presence of an IDS (Intrusion Detection System) or logging services help the attacker choose a method that can slip past such security products (2003).
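The "standard" scan described above, a full connect() scan, takes only a few lines of socket code. The sketch below checks a list of ports on one host; it is the loud, easily logged variant, since the stealthier SYN, FIN, Xmas, and Null scans require crafting raw packets and are deliberately left out.

```python
import socket

def connect_scan(host, ports, timeout=0.5):
    """Attempt a full TCP handshake on each port; return those that accept.
    Unlike a half-open SYN scan, this completes the connection, so the
    scanner's address appears in the target's logs."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

For example, `connect_scan("127.0.0.1", range(1, 1025))` would probe the well-known ports on the local machine; each open port found corresponds to a listening service.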
Another important hacking concept is hiding data. The work of hackers and the programs they employ, such as viruses, trojans, and rootkits, must stay secret to be of value. This has been the case even with some of the earliest viruses, which maintained a secret presence on computers through stealth (Ford and Allen, 2007). As Ford and Allen put it in their article "How Not to Be Seen," "if an unwanted computer program can't be seen, it can't be eliminated" (2007). Once the activities of a hacker are discovered, their attempts can be foiled, so they hide their programs.
The simplest hiding technique is to use operating system controls to hide a file (Ford and Allen, 2007). Hiding files through the operating system requires no hacking skill, as the feature is already built into the software; the attacker merely instructs the operating system to mark the file hidden. These techniques are called passive techniques because they hide only from a passive scan, one in which no one is actively looking for hidden data (p. 67). Programs or files hidden with a passive method are easily discovered by a savvy user or by off-the-shelf anti-malware programs. The converse of passive stealth is active stealth, where the attacker changes the operation of the computer itself in order to hide files (Ford and Allen, 2007). Early viruses ran on the simpler Disk Operating System (DOS), which allowed programs to manage far more of the system's hardware resources, and to interface with those resources directly, than current operating systems such as Microsoft Windows do. These viruses hid using a technique called hooking: the virus intercepted the BIOS disk interrupt (INT 13h) so that when the infected area of the disk was read, it returned innocuous data instead of its own code (Ford and Allen, p. 68). As operating systems became more refined, they took more control and gave less to the applications running within them, and the art of hiding files was refined in turn. Operating systems separated application processes into user mode and the operating system itself into kernel mode (p. 68), and applications running in user mode had limited knowledge of the system and of the other services and applications installed on it. Hackers developed stealth methods for both user and kernel mode, but user-mode stealth could be detected easily (Ford and Allen, 2007). For programs to hide effectively in this environment, they had to move from user mode to kernel mode. There are now a variety of methods hackers use to operate stealthily in kernel mode, and other technologies have been developed to attempt to find those applications.
Hackers initiating attacks use stealth not only to hide their programs but also to obfuscate the source of their actions. Some methods include proxies and anonymizers, services that strip out information from an Internet session that could be used to determine the location and system information of a host using the service (Matthew, 2003, p. 167). This makes it more difficult for security administrators to track down the source of attacks and allows hackers to continue their work without being prosecuted for crimes.
Another form of stealth is spoofing, in which an attacker pretends to be someone else (Fadia). The type of spoofing depends on which component of the communication the attacker modifies. IP spoofing, for example, makes attacks appear to come from a different IP address (p. 187), while MAC spoofing makes them appear to come from a different MAC address. Some spoofing programs include Chaos Spoof, Command IP Spoofer, DC_is, Dr Spewfy, Fake IP, Wingate Spoofing, Domain Wnspoof, and X-Ientd (Chirillo, 2001, p. 855).
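IP spoofing is possible because the source address is just a field the sender fills in. The sketch below builds a minimal IPv4 header with a forged source address using only the standard struct module; the addresses are illustrative, and actually transmitting such a packet would require a raw socket and administrative privileges, which this sketch deliberately avoids.

```python
import socket
import struct

def build_ipv4_header(src_ip, dst_ip, payload_len=0, ttl=64,
                      proto=socket.IPPROTO_ICMP):
    """Pack a 20-byte IPv4 header. Nothing stops src_ip from being forged."""
    version_ihl = (4 << 4) | 5            # IPv4, 5 x 32-bit header words
    total_len = 20 + payload_len
    header = struct.pack("!BBHHHBBH4s4s",
                         version_ihl, 0, total_len,
                         0, 0,                       # identification, flags/fragment
                         ttl, proto, 0,              # checksum filled in below
                         socket.inet_aton(src_ip),   # spoofed source address
                         socket.inet_aton(dst_ip))
    # Internet checksum: one's-complement sum of the 16-bit header words.
    csum = sum(struct.unpack("!10H", header))
    csum = (csum & 0xFFFF) + (csum >> 16)
    csum = (csum & 0xFFFF) + (csum >> 16)
    csum = ~csum & 0xFFFF
    return header[:10] + struct.pack("!H", csum) + header[12:]
```

Because the receiver computes nothing about the sender beyond what the header claims, replies to this packet go to the forged source, which is also why spoofed addresses are popular for decoy scans and floods.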
It is always best to automate routine tasks, and fuzzing is one example of automating an attack. Fuzzing, otherwise termed fault injection, is the automated insertion of random, lengthy, or harmful strings or code fragments into a program's input controls (Oehlert, 2005). It is a method hackers use to probe for areas where invalid input can lead to access, escalated privileges, or program malfunctions (2005). A variety of hacker and commercial tools have been developed for this purpose, including Microsoft's FxCop (Curphey and Araujo, 2006). Fuzzing lets hackers try many possible input combinations against a system; once it identifies a vulnerability, the hacker can determine the exact input to send to the program to carry out their intent (Oehlert, 2005).
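The core fuzzing loop can be sketched in a few lines: feed randomly generated strings to a target and record every input that makes it crash. The target below is a toy parser with a deliberately planted flaw (it indexes past the end of short inputs), an assumption made purely so the fuzzer has something to find.

```python
import random
import string

def fragile_parser(data):
    """Toy target with a planted flaw: assumes '!' commands are 4+ chars."""
    if data[0] == "!":
        return data[1] + data[2] + data[3]   # IndexError on short '!' inputs
    return data

def fuzz(target, rounds=2000, seed=1):
    """Throw random strings at target; collect every input that raises."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        size = rng.randint(0, 8)
        data = "".join(rng.choice("!" + string.ascii_letters) for _ in range(size))
        try:
            target(data)
        except Exception:
            crashes.append(data)               # a crash marks a candidate flaw
    return crashes
```

Each recorded crash is a starting point for the refinement step the text describes: from a crashing input, the attacker works out the exact input that turns a malfunction into access.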
D. DoS (Denial of Service)
Denial of Service (DoS) attacks attempt to compromise the availability of systems. Early DoS attacks took the form of the Ping of Death (PoD), in which Internet Control Message Protocol (ICMP) messages are sent to the target with a payload that pushes the packet past the maximum size of 65,535 bytes (Erickson, 2003, p. 160). ICMP DoS attacks including the PoD have been protected against with patches to operating systems and infrastructure devices, but hackers have developed newer methods of disrupting availability. Later DoS exploits include the teardrop attack and flooding attacks. The teardrop attack sets the beginning and ending segments of fragmented packets to overlap, crashing a machine when the packets are reassembled (p. 161). Flooding attacks do not try to crash the machine; instead they send it so much data that it is too busy processing bogus data to respond to legitimate data (2003). In SYN flooding, attackers create many connection requests that sit in a computer's queue until the queue is full; the attacker never completes the connection sequence, so the queue stays full until the requests time out (Erickson). An older DoS method is the NetBIOS DoS attack, where fake name-release and name-conflict messages are sent to the WINS (Windows Internet Naming Service) computer to remove a machine from the directory (Matthew, p. 247). Ping flooding sends so many echo messages or echo responses to a machine that it cannot process other traffic (Erickson, p. 162). DoS attacks are sometimes initiated purely to attack availability; at other times they are used to take down a resource that could hinder the attacker's future actions, such as a logging server or monitoring device.
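The SYN-flood mechanism can be modeled without sending a single packet. In this simplified sketch (the queue capacity and class name are invented for illustration), the server's half-open connection queue has a fixed capacity; spoofed SYNs that are never completed fill it until legitimate clients are turned away.

```python
class SynQueue:
    """Toy model of a TCP listen backlog holding half-open connections."""
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.half_open = []

    def syn(self, src):
        """A SYN arrives: queue it if there is room, else refuse (DoS symptom)."""
        if len(self.half_open) >= self.capacity:
            return False                     # queue full: connection refused
        self.half_open.append(src)
        return True

    def ack(self, src):
        """Handshake completed (or entry timed out): leaves the queue."""
        self.half_open.remove(src)

flood = SynQueue()
for i in range(8):                           # attacker sends spoofed SYNs...
    flood.syn(f"forged-{i}")                 # ...and never sends the final ACK
# a legitimate client is now refused until entries time out
```

Defenses such as shorter timeouts or SYN cookies amount to keeping this queue from being monopolized by entries that will never complete.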
E. Password Cracking
Depending on the information gathered, a hacker might simply attempt to guess a password. This is especially common when social engineering has been used on individuals at the target organization, but it requires a great amount of manual interaction with the system (Matthew). Manual password guessing is also used when passwords are discovered on the network or in monitored communications.
Automated guessing techniques are more commonly used. These break down into brute-force password cracking, where all possible combinations of letters and numbers are tried (p. 235), and dictionary attacks, where dictionary words, often combined with standard letters and numbers, are used to form possible passwords (p. 234). Some automated password cracking programs include L0phtCrack, NetBIOS Auditing Tool, John the Ripper, KerbCrack, and Legion (p. 217). Each of these programs tries passwords until a correct match is found. Some work with both usernames and passwords and will continue after finding a match; the hacker stops the program once they have a sufficient number of usernames and passwords.
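A dictionary attack in the style of the tools above is easy to sketch: hash each candidate word, plus a few common variants, and compare against a stolen password hash. Plain SHA-256 is used here only to keep the illustration simple; real password stores use salted, deliberately slow hashes, and the wordlist and variant rules are assumptions.

```python
import hashlib

def sha256_hex(word):
    """Hash a candidate the way the (hypothetical) password store does."""
    return hashlib.sha256(word.encode()).hexdigest()

def dictionary_attack(target_hash, wordlist):
    """Try each word, its capitalized form, and word+digit variants
    against the stolen hash; return the first match, or None."""
    for word in wordlist:
        variants = (word, word.capitalize(), *[word + str(d) for d in range(10)])
        for candidate in variants:
            if sha256_hex(candidate) == target_hash:
                return candidate
    return None
```

The variant rules mirror how users actually modify dictionary words ("dragon" becomes "Dragon" or "dragon7"), which is why dictionary attacks succeed far faster than brute force against weak passwords.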
F. Malware
Malware is any type of program that takes nefarious actions on a system. Malware types include viruses, Trojan horses, worms, spyware, and rootkits. The worst type of malware is the rootkit. Rootkits, once installed, give attackers a way to take control of a machine at will without the machine's user knowing (Arce and McGraw, 2004). Matthew states that rootkits are often used to let hackers use a computer as the source of attacks against other systems, because the stealth a rootkit provides allows the attacker to load many useful programs onto the machine without the user knowing of them (p. 259).
Viruses have been used by hackers for a long time. Viruses can be used to reduce the availability of systems or to gather information. Viruses attach themselves to programs to propagate and do damage. They execute when the program they are attached to is loaded into memory. Worms are used by hackers for similar purposes to viruses but worms can propagate by a variety of means and they do not require a host program to operate.
Trojan horses are another common malware tool hackers use to gain access to a machine (Matthew, 2003). A Trojan horse travels in a program that appears benign, performing unwanted actions the user knows nothing about alongside some seemingly useful function (p. 304). A Trojan horse gives the attacker the same credentials as the user who installed it (p. 305), which is often an attacker's first foothold in a system; privilege escalation is used later to gain additional control over the machine. Some trojan infectors include BoFreeze, Coma, GirlFriend, Jammer, NetBus, Masters Paradise Loader, Prosiac, SubSeven, and Back Orifice (Chirillo, 2001, p. 856).
Another form of malware is spyware. Hackers can use spyware to discover what actions users take (Matthew), which tells them which user accounts would be most useful to take over for their malicious activities. Some spyware programs used by hackers include Spectator Pro and eBlaster (p. 254).
G. Escalating Privileges
Hackers are not satisfied with simply accessing a machine; they want administrative credentials so that they can do whatever they want on it (Matthew). To get them, they must escalate standard or limited privileges to administrative privileges, and tools such as ERunAs2x for Windows 2000 or GetAdmin will do just that (p. 228, 2003). Once hackers escalate their privileges, they have full control over the machine, or possibly many machines. If an attacker gains administrative credentials on a domain controller, they can use those privileges to change passwords, create new accounts and backdoors, and carry out many other malicious activities that could do a great deal of damage.
Keyloggers are sometimes used to combine password cracking with privilege escalation (Matthew, 2003). Once attackers have access to a machine, they can drop a keylogger onto it to gather the usernames and passwords typed there. If an administrator, or anyone with higher credentials than the attacker, logs on after the keylogger is in place, the username and password are logged for the attacker to retrieve later (p. 253). One keylogger program used by hackers is IKS (Invisible Keylogger); all usernames and passwords are logged to a file that the attacker can easily obtain (p. 258).
IV. Penetration Testing
Penetration testing is the process of attacking systems to determine their overall security level and weaknesses. It is vital if an organization is to understand its security posture and the risks involved with deploying a system or application, and Arce and McGraw assert that it should be conducted in both the security design and implementation stages (2004). However, organizations must make a concerted effort when conducting penetration testing, and some confuse it with bug testing or functional testing. Penetration testing differs from functional testing in that functional testing determines whether the software can perform an intended action, while penetration testing determines whether the software can be made to perform actions it should not (Arkin, Stender, and McGraw, 2005). Companies must be thorough both when performing the test and when fixing the problems found. Some companies have been known to conduct only a nominal test, or to simply run off-the-shelf tools that discover only the most basic flaws, which are then fixed when a report is delivered while more complex flaws remain (Arce and McGraw, 2004). This may make management and software engineers feel good, but it will not much improve the security posture of the software or system.
Penetration testing (Curphey and Araujo) should focus on all entry points, not just the customer-facing interface (2006). Systems are very complex and can have many entry points, and a system is only as secure as its most insecure element. Penetration testing, however, should not focus on the “low-hanging fruit” (Arkin, Stender, and McGraw) by exploiting only the easiest elements (2005). Companies must know where they are vulnerable, and they must have this knowledge at multiple levels. To provide this type of information, penetration testers (Arkin, Stender, and McGraw) should test each component of the system individually and then together (2005) to find flaws in components that would not appear if the system were tested only as a whole. Also, if the results of the penetration tests show a series of similar problems, more general controls should be developed to prevent similar problems from occurring in other code or in the same code later in the project. Penetration testers, therefore, must analyze the system as deeply as possible to give the best results.
Penetration testing includes some additional terminology. The DoD (Department of Defense) refers to red teams (White and Conklin) as the attackers and blue teams as the defenders, and it frequently conducts red team exercises to evaluate the ability of security administrators to detect attacks and successfully defend the facility (p. 34). The DoD uses red teaming to test not only networks but all elements of security, including physical security and human resource vulnerabilities in addition to technical vulnerabilities (p. 35, 2004).
B. Lifecycle Penetration Testing
Penetration testing of applications (Arkin, Stender, and McGraw) should be conducted throughout the Software Development Life Cycle (SDLC) (2005). Different tests can be performed at specific stages of the SDLC, such as code review and preliminary risk analysis early on, followed by analysis of compiled binaries and deployment configurations in later stages. Organizations most often wait too long to perform penetration testing; a Capers Jones survey (Curphey and Araujo) found that a bug costing $16,000 to correct in deployment would cost only $1 to fix in development (p. 35, 2006). However, it is also possible to test too early and waste valuable resources. White and Conklin state: “If the organization under test has not sufficiently grown in its security development, a technical penetration test, especially a red teaming event, always results in the red team successful penetrating the site and won’t lead to any significant improvements” (2004). Timing is clearly important in penetration testing. The right activities should be performed at the right stages, when the organization is in a position to benefit from the testing.
C. Preparing for Penetration Testing
The first step in preparing for a penetration test is to determine the goal of the test. White and Conklin cite (p. 34) discovering vulnerabilities or giving security administrators an opportunity to learn how to respond to an attack as two possible goals of a test (2004). It is also important to know the scope: for example, whether the test covers only certain systems or applications, a specific site, and whether it covers electronic systems only or the entire facility (White and Conklin, 2004). Next, it is very important to define the parameters of a penetration test (Bishop), such as the meaning of breaking into the system (2007). Breaking into the system could mean finding a vulnerability, inserting malicious code, or taking down the entire system. For environments where 24×7 availability is necessary, there may be limitations on what breaking into the system means and what actions penetration testers may perform.
When preparing for penetration testing it is important to construct a set of tests that covers a broad range of possible vulnerabilities. Curphey and Araujo define a framework of eight categories into which test cases should fall (p. 34), and penetration testers should design tests that fit into each of them. The categories are as follows: configuration management, authentication, authorization, data protection, user and session management, data validation, error handling and exception management, and auditing and event logging (2006). During the preparation stage penetration testers can choose to use a variety of tools. Tools can be very helpful (Arkin, Stender, and McGraw) because their output can be used in trend analysis or for computing baselines and metrics on the security state of the software. However, automated tools alone are not enough to provide an accurate picture of a software application’s security state (2005). It is also important to define the approach that will be used. A risk-based approach, as defined by McGraw and Potter, is recommended.
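One lightweight way to act on the eight-category framework is to treat it as a coverage checklist for the test plan. The sketch below is an assumed convention, not part of the cited framework; the example test-case names are hypothetical placeholders:

```python
# The eight Curphey and Araujo categories, used as a test-plan skeleton.
CATEGORIES = [
    "configuration management",
    "authentication",
    "authorization",
    "data protection",
    "user and session management",
    "data validation",
    "error handling and exception management",
    "auditing and event logging",
]

def build_plan(cases):
    """Group proposed test cases by category and flag categories with no coverage."""
    plan = {c: [] for c in CATEGORIES}
    for category, name in cases:
        plan[category].append(name)
    uncovered = [c for c, tests in plan.items() if not tests]
    return plan, uncovered

plan, uncovered = build_plan([
    ("authentication", "brute-force lockout check"),      # hypothetical case
    ("data validation", "SQL injection probe on search"),  # hypothetical case
])
print(len(uncovered))  # 6 -> six categories still lack any test case
```

A plan that leaves categories uncovered signals exactly the kind of nominal testing the previous section warns against.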
Penetration testers must have a clearly defined goal, limitations, parameters, and a plan of action so that their testing produces useful results in a cost-effective manner, which mainly comes down to the test being timely and not producing ill effects on the systems being tested. Without these, it will be difficult to perform the test and harder to provide accurate results companies can use to improve their systems.
1. Tools
Tools can be used to test components in several of these categories, and penetration testers should determine which tools to use during this preparation stage. If the source code for an application is available, source code analyzers can be very useful; however, the results from some tools will be limited depending on the programming language used to develop the software (Curphey and Araujo). Source code analyzers are also limited in that they do not analyze multiple lines of code together and thus can only find problems within a single line of code (2006).
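The single-line limitation can be illustrated with a toy analyzer. This sketch is not how any particular commercial analyzer works; it simply flags risky patterns line by line, which is enough to show why a flaw spread across two lines evades it:

```python
import re

# Toy single-line analyzer: flag any line matching a risky pattern.
RULES = [(re.compile(r"\beval\("), "use of eval")]

def scan(source):
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

direct = "result = eval(user_input)\n"
indirect = "f = eval\nresult = f(user_input)\n"  # same risk, split over two lines

print(scan(direct))    # [(1, 'use of eval')] -- caught
print(scan(indirect))  # []                   -- missed: no single line matches
```

Real analyzers are more sophisticated, but the underlying point from Curphey and Araujo stands: patterns confined to one line cannot track data flow across lines.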
Another useful tool is the web application or black box scanner. The black box scanner (Curphey and Araujo) is useful for finding some of the most obvious problems with a web-enabled application. It can perform a variety of attacks, including brute force authentication attempts. However, it is limited in that it can only probe from the customer perspective, and black box scanners tend to find only about five percent of the security flaws; still, they can save the penetration tester time so that he or she can concentrate on deeper testing (p. 38). Another component of web applications is their configuration. Configuration analysis tools (p. 40) exist to find problems in configuration files such as the web.xml, machine.config, and web.config files and in web server settings (2006).
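The brute force authentication attempt such a scanner performs can be sketched in a few lines. The `login` function below is a stand-in for the real endpoint a scanner would probe over HTTP, and the three-letter lowercase password is contrived so the search finishes instantly:

```python
import itertools
import string

SECRET = "cab"          # hypothetical weak credential, for demonstration only

def login(password):
    """Stand-in for the application's authentication check."""
    return password == SECRET

def brute_force(max_len=3):
    """Try every lowercase candidate up to max_len, as a scanner module might."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(string.ascii_lowercase, repeat=length):
            candidate = "".join(combo)
            if login(candidate):
                return candidate
    return None

print(brute_force())  # 'cab'
```

Against a real system the same exhaustive approach is rate-limited, logged, and vastly slower, which is exactly what lockout and throttling controls in the authentication category are meant to guarantee.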
Database scanners are also good tools for penetration testers. They are specifically targeted at the database, an important component of most applications. Database scanners are effective at finding (Curphey and Araujo) configuration problems, issues with stored procedures, users and permissions, and vulnerabilities in enabled features (2006). Some penetration testers also use runtime analysis tools to view the parameters being passed to and from functions. These tools do not pinpoint vulnerabilities themselves, but they are a good aid for experienced penetration testers (p. 39), as they let the tester see how the components of a program interact and how changes in one area affect the flow of data in the program.
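At their core, the configuration checks a database scanner performs amount to comparing live settings against a hardening baseline. The setting names below mirror common hardening guidance but are purely illustrative; no specific scanner or database product is implied:

```python
# Hypothetical hardening baseline a database scanner might enforce.
BASELINE = {
    "remote_admin_enabled": False,
    "sample_databases_installed": False,
    "default_password_unchanged": False,
    "auditing_enabled": True,
}

def scan_config(actual):
    """Report every setting that deviates from the hardening baseline."""
    return [name for name, expected in BASELINE.items()
            if actual.get(name) != expected]

findings = scan_config({
    "remote_admin_enabled": True,        # risky feature left on
    "sample_databases_installed": False,
    "default_password_unchanged": True,  # default credentials still in place
    "auditing_enabled": True,
})
print(findings)  # ['remote_admin_enabled', 'default_password_unchanged']
```

Real scanners add checks for stored procedures, users, and permissions on top of this pattern, but the deviation-from-baseline logic is the common thread.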
2. Risk-based Approach
Penetration testing is much more effective at finding security flaws when the proper approach is used. To achieve this more effective test, (McGraw and Potter) the approach to penetration testing should be risk-based, with risks analyzed from the beginning of the software's development (2004). Penetration testing and risk analysis at the design stage can uncover design-level vulnerabilities such as (Potter and McGraw) object sharing issues, exposed data conduits, absent controls, or a lack of auditing (2004). Similarly, Arce and McGraw state that penetration testing should have risk analysis as its foundation (2004). Problems with the software architecture are identified at the beginning of the lifecycle so that they can be corrected, and corrected more cost-effectively than if they were identified toward the end of the project (McGraw and Potter, 2004). Identified risks and problems are then continually probed along the way.
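A minimal sketch of risk-based prioritization, under the common (but here assumed) convention of scoring each identified risk by likelihood times impact on 1-5 scales and testing the highest-scoring items first. The risk names reuse the design-level vulnerability classes mentioned above; the scores are invented for illustration:

```python
# (risk description, likelihood 1-5, impact 1-5) -- scores are illustrative
risks = [
    ("exposed data conduit between services", 4, 5),
    ("missing audit logging",                 3, 2),
    ("object sharing across trust boundary",  2, 4),
]

def prioritize(risks):
    """Order risks by likelihood * impact, highest first, to drive test order."""
    return sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in prioritize(risks):
    print(f"{likelihood * impact:>2}  {name}")
# 20  exposed data conduit between services
#  8  object sharing across trust boundary
#  6  missing audit logging
```

The ordering, not the exact scoring formula, is the point: penetration testing effort flows to the risks the analysis ranks highest, and the ranking is revisited as the project evolves.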
Understanding hacking is increasingly important for the security practitioner. It is through knowledge of hacking that security practitioners truly understand the vulnerabilities in the systems they manage. Hacking requires an individual to think differently. Hackers do not accept things at face value. They discover how things work and explore the underlying technologies that systems and software are built upon. They attempt to find exploits for uncommon possibilities and continually test software to see how it reacts to new input.
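That habit of continually feeding software new input is the essence of fuzzing. A minimal sketch, with a deliberately fragile toy parser as the target (both the parser and the input alphabet are contrived for the demonstration):

```python
import random

def parse_record(line):
    """Toy parser with a latent assumption: every line has exactly two fields."""
    name, value = line.split(",")  # ValueError unless exactly one comma
    return name, int(value)        # ValueError unless the value is numeric

def fuzz(trials=200, seed=1):
    """Throw short random strings at the parser; collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        line = "".join(rng.choice("abc,123 ") for _ in range(rng.randint(0, 8)))
        try:
            parse_record(line)
        except ValueError:
            crashes.append(line)
    return crashes

print(len(fuzz()) > 0)  # True: random input quickly finds crashing cases
```

Even this naive random fuzzer surfaces the parser's unstated assumptions within a few hundred trials; real fuzzers add input grammars and coverage feedback to reach deeper code paths.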
Hacking skills can be used for good by conducting penetration tests on approved systems and applications, and penetration testing is recommended throughout the life cycle of an application or system. Penetration testing and hacking require some of the same skills, including discovery, stealth, the use of malware, fuzzing, password cracking, privilege escalation, and denial of service.
- Arce, I., & McGraw, G., (2004). Why Attacking Systems Is a Good Idea. IEEE Security and Privacy. 2(4). 17-19.
- Arkin, B., Stender, S., & McGraw, G. (2005). Software Penetration Testing. IEEE Security and Privacy. 3(1). 84-87.
- Bishop, M. (2007). About Penetration Testing. IEEE Security and Privacy. 5(6). 84-87.
- Bratus, S. (2007). Hacker Curriculum: How Hackers Learn Networking. IEEE Distributed Systems Online. 8(10).
- Bratus, S. (2007). What Hackers Learn that the Rest of Us Don’t: Notes on Hacker Curriculum. IEEE Security and Privacy. 5(4). 72-75.
- Chirillo, J. (2001). Hack attacks revealed. New York, NY: Wiley Computer Publishing.
- Curphey, M., & Araujo, R. (2006). Web Application Security Assessment Tools. IEEE Security and Privacy. 4(4). 32-41.
- Erickson, J. (2003) Hacking: The art of exploitation. San Francisco, CA: No Starch Press.
- Fadia, A. (2003). Network security: A hacker’s perspective. Cincinnati, OH: Premier Press.
- Ford, R., & Allen, W. (2007). How Not to Be Seen. IEEE Security and Privacy. 5(1). 67-69.
- Gordon, S. (2006). Understanding the Adversary: Virus Writers and Beyond. IEEE Security and Privacy. 4(5). 67-70.
- Matthew, T. (2003). Ethical Hacking. New York, NY: International Council of E-Commerce Consultants.
- McFedries, P. (2004). Hacking Unplugged. IEEE Spectrum Online. Retrieved July 30, 2008 from http://www.spectrum.ieee.org/feb04/3879
- McGraw, G., & Potter, B. (2004). Software Security Testing. IEEE Security and Privacy. 2(5). 81-85.
- Mitnick, K., & Simon, W. (2002). The Art of Deception: Controlling the Human Element of Security. Hoboken, NJ: Wiley.
- Oehlert, P. (2005). Violating Assumptions with Fuzzing. IEEE Security and Privacy. 3(2). 58-62.
- Perrone, L. (2007). Could a Caveman Do It? The Surprising Potential of Simple Attacks. IEEE Security and Privacy. 5(6). 74-77.
- Pincus, J., & Baker, B. (2004). Beyond Stack Smashing: Recent Advances in Exploiting Buffer Overruns. IEEE Security and Privacy. 2(4). 20-27.
- Ring, S., & Cole, E. (2004). Taking a Lesson from Stealthy Rootkits. IEEE Security and Privacy. 2(4). 38-45.
- Seacord, R. (2005). Secure Coding in C and C++. Indianapolis, IN: Addison Wesley Professional.
- White, G., & Conklin, A. (2004). The Appropriate Use of Force-on-Force Cyberexercises. IEEE Security and Privacy. 2(4). 33-37.