Tuesday, July 14, 2009
Encryption Algorithms
Monday, July 6, 2009
The ISO27001 Certification Process
ISO/IEC 27001 requires that the organization's management:
- Systematically examines the organization's information security risks, taking account of the threats, vulnerabilities and impacts;
- Designs and implements a coherent and comprehensive suite of information security controls and/or other forms of risk treatment (such as risk avoidance or risk transfer) to address those risks that it deems unacceptable; and
- Adopts an overarching management process to ensure that the information security controls continue to meet the organization's information security needs on an ongoing basis.
Certification against ISO/IEC 27001 is typically a three-stage audit process:
- Stage 1 is a preliminary, informal review of the ISMS, for example checking the existence and completeness of key documentation such as the organization's information security policy, Statement of Applicability (SoA) and Risk Treatment Plan (RTP). This stage serves to familiarize the auditors with the organization and vice versa.
- Stage 2 is a more detailed and formal compliance audit, independently testing the ISMS against the requirements specified in ISO/IEC 27001. The auditors will seek evidence to confirm that the management system has been properly designed and implemented, and is in fact in operation (for example by confirming that a security committee or similar management body meets regularly to oversee the ISMS). Certification audits are usually conducted by ISO/IEC 27001 Lead Auditors. Passing this stage results in the ISMS being certified compliant with ISO/IEC 27001.
- Stage 3 involves follow-up reviews or audits to confirm that the organization remains in compliance with the standard. Certification maintenance requires periodic re-assessment audits to confirm that the ISMS continues to operate as specified and intended. These should happen at least annually but (by agreement with management) are often conducted more frequently, particularly while the ISMS is still maturing.
Sunday, July 5, 2009
Top 10 Worst Computer Worms of All Time
10. Jerusalem (also known as BlackBox)
Discovered in 1987, Jerusalem is one of the earliest worms. It is also one of the most commonly known viruses, deleting files that are executed on each Friday the 13th. Its name comes from the city in which it was first detected, the city of Jerusalem.
The worm, which targets DOS, increases the size of every executable file that is run (with the exception of COMMAND.COM).
Jerusalem is a variant of the Suriv virus, which also deletes files at random periods during the year (April Fool's Day and/or Friday the 13th depending on the variant). The Jerusalem worm inspired a host of similar worms that grow by a specified file size when executed. Another variant, Frère, plays the song Frère Jacques on the 13th day of the month.
While Jerusalem and its relatives were quite common in their day, they became less of a threat when Windows was introduced.
9. Michelangelo
In 1991, thousands of machines running MS-DOS were hit by a new worm, one which was scheduled to be activated on the artist Michelangelo's birthday (March 6th). On that day, the virus would overwrite the hard disk or change the master boot record of infected hosts.
When the worm came to mainstream attention, mass hysteria reigned and millions of computers were believed to be at risk. After March 6th, however, it was realized that the damage was minimal. Only 10,000 to 20,000 cases of data loss were reported.
Ironically, however, because of the media hype, the period before March 6, 1992 became known as "Michelangelo Madness," with users buying anti-virus software in droves, some for the very first time. In a way, the "madness" led many people to prepare for the outbreak and helped minimize the actual damage caused by the worm.
8. Storm Worm
One of the newest worms to hit the Internet was the Storm Worm, which debuted in January of 2007. Its name came from a widely circulated email about the Kyrill weather storm in Europe, and its subject was "230 dead as storm batters Europe." The virus first hit on January 19th, and three days later, the virus accounted for 8% of all infected machines.
If your computer was infected by the Storm Worm, your machine became part of a large botnet. The botnet acted to perform automated tasks that ranged from gathering data on the host machine, to DDOSing websites, to sending infected emails to others. As of September 2007, an estimated 1 million to 10 million computers were still part of this botnet, which had sent an estimated 1.2 billion infected emails from its compromised hosts.
Storm Worm is a difficult worm to track down because the botnet is decentralized and the computers that are part of the botnet are consistently being updated with the fast flux DNS technique. Consequently, it has been difficult for infected machines to be isolated and cleaned.
7. Sobig
In 2003, millions of computers were infected with the Sobig worm and its variants. The worm was disguised as a benign email. The attachment was often a *.pif or *.scr file that would infect any host if downloaded and executed. Sobig-infected hosts would then activate their own SMTP host, gathering email addresses and continually propagating through additional messages.
Sobig depended heavily on public websites to execute additional stages of the virus. Fortunately, in earlier cases, these sites were shut down after the discovery of the worm. Later, when Geocities was found to be the primary hosting point for Sobig variants, the worm instead communicated with hacked cable-modem hosts that served as the next stage in its execution.
The result? Sobig infected approximately 500,000 computers worldwide and cost as much as $1 billion in lost productivity.
6. MSBlast
The summer of 2003 wasn't much easier for those building anti-virus definitions or those at businesses or academic institutions. In July of that year, Microsoft announced a vulnerability within Windows. A month later, that vulnerability was exploited. This worm was called MSBlast, a name created by the worm's author, and it included a personal message from the author to Bill Gates. The note read, "billy gates why do you make this possible? Stop making money and fix your software!!"
When MSBlast hit, it installed a TFTP (Trivial File Transfer Protocol) server and downloaded code onto the infected host. Within several hours of its discovery, it had hit nearly 7,000 computers. Six months later, over 25 million hosts were known to be infected. The Windows Blaster Worm Removal Tool was finally launched by Microsoft in January of 2004 to remove traces of the worm.
A 19-year-old from Minnesota, Jeffrey Lee Parson, was arrested and sentenced to 18 months in prison with 10 months of community service after launching a variant of the MSBlast worm that affected nearly 50,000 computers.
5. Melissa
Want porn but don't have any? In 1999, hungry and curious minds downloaded a file called List.DOC in the alt.sex Usenet discussion group, assuming that they were getting free access to over 80 pornographic websites. Little did they know that the file within was responsible for mass-mailing thousands of recipients and shutting down nearly the entire Internet.
You get what you pay for.
Melissa spread through Microsoft Word 97 and Word 2000, mass emailing the first 50 entries from a user's address book in Outlook 97/98 when the document was opened. The Melissa worm randomly inserted quotes from The Simpsons TV show into documents on the host computer and deleted critical Windows files.
The Melissa worm caused $1 billion in damages. Melissa's creator, a David Smith from New Jersey, named the worm after a lap dancer he met while vacationing in Florida. Smith was imprisoned for 20 months and fined $5,000.
4. Code Red
Friday the 13th was a bad day in July of 2001; it was the day Code Red was released. The worm took advantage of a buffer overflow vulnerability in Microsoft IIS servers and would self-replicate by exploiting the same vulnerability in other Microsoft IIS machines. Web servers infected by the Code Red worm would display the following message:
HELLO! Welcome to http://www.worm.com! Hacked By Chinese!
After 20 to 27 days, infected machines would attempt to launch a denial of service on many IP addresses, including the IP address of www.whitehouse.gov.
Code Red and its successor, Code Red II, are known as two of the most expensive worms in Internet history, with damages estimated at $2 billion, accruing at a rate of roughly $200 million per day.
3. Nimda
In the fall of 2001, Nimda ("admin" spelled backwards) infected a variety of Microsoft machines very rapidly through an email exploit. Nimda spread by finding email addresses in .html files located in the user's web cache folder and by looking at the user's email contacts as retrieved by the MAPI service. The consequences were heavy: all web-related files were appended with JavaScript that allowed further propagation of the worm, users' drives were shared without their consent, and "Guest" user accounts with Administrator privileges were created and enabled.
A market research firm estimated that Nimda caused $530 million in damages after only one week of propagation.
Several months later, reports indicated that Nimda was still a threat.
2. ILOVEYOU (also known as VBS/Loveletter or Love Bug Worm)
You may have gotten an email in 2000 with the subject line "ILOVEYOU." If you deleted it, you were safe from one of the most costly worms in computer history. The attachment in that email, a file called LOVE-LETTER-FOR-YOU.TXT.vbs, started a worm that spread like wildfire by accessing email addresses found in users' Outlook contact lists. Unsuspecting recipients, believing the email to be benign, would execute the document only to have most of their files overwritten.
The net result was an estimated $5.5 billion to $8.7 billion in damages. Ten percent of all Internet-connected computers were hit.
Onel A. de Guzman, the creator of the virus and a resident of the Philippines, had all charges dropped against him for creating the worm because there were no laws at the time prohibiting the creation of computer worms. Since then, the government of the Philippines has laid out penalties for cybercrime that include imprisonment for 6 months to 3 years and a fine of at least 100,000 pesos (USD $2000).
1. Morris Worm (also known as the Great Worm)
How big is the Internet, you ask? In 1988, a Cornell University graduate student named Robert Tappan Morris launched 99 lines of code in his quest for the answer. While his intentions were not malicious, there were bugs in his code that caused affected hosts to encounter a plethora of stability problems that effectively made these systems unusable. The result was increased load averages on over 6,000 UNIX machines across the country, causing an estimated $10 million to $100 million in damage.
FIREWALLS
A firewall is basically the first line of defense for your network. The basic purpose of a firewall is to keep uninvited guests from browsing your network. A firewall can be a hardware device or a software application and generally is placed at the perimeter of the network to act as the gatekeeper for all incoming and outgoing traffic.
A firewall allows you to establish certain rules to determine what traffic should be allowed in or out of your private network. Depending on the type of firewall implemented you could restrict access to only certain IP addresses or domain names, or you can block certain types of traffic by blocking the TCP/IP ports they use.
There are basically four mechanisms used by firewalls to restrict traffic. One device or application may use more than one of these in conjunction with each other to provide more in-depth protection. The four mechanisms are packet-filtering, circuit-level gateway, proxy server and application gateway.
A packet filter intercepts all traffic to and from the network and evaluates it against the rules you provide. Typically the packet filter can assess the source IP address, source port, destination IP address and destination port. It is these criteria that you can filter on, allowing or disallowing traffic from certain IP addresses or on certain ports.
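The rule evaluation described above can be sketched in a few lines. This is an illustrative model only (real packet filters run in the kernel or on dedicated hardware); the rule and packet field names are my own, and a `None` field is taken to mean "match anything":

```python
# Minimal sketch of packet-filter rule evaluation. Rules are checked in
# order; the first matching rule decides, and the default is to deny.

def matches(rule, packet):
    """A rule matches if every non-None rule field equals the packet's field."""
    return all(
        rule.get(field) is None or rule[field] == packet[field]
        for field in ("src_ip", "src_port", "dst_ip", "dst_port")
    )

def filter_packet(rules, packet, default="DENY"):
    """Apply rules in order; the first matching rule's action wins."""
    for rule in rules:
        if matches(rule, packet):
            return rule["action"]
    return default  # most restrictive stance: deny anything unmatched

rules = [
    # Allow anyone to reach the web server on port 80...
    {"src_ip": None, "src_port": None, "dst_ip": "10.0.0.5", "dst_port": 80, "action": "ALLOW"},
    # ...but everything else is denied by the default.
]
```

The deny-by-default fall-through mirrors the "start with the most restrictive" advice later in this post.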
A circuit-level gateway blocks all incoming traffic to any host but itself. Internally, the client machines run software to allow them to establish a connection with the circuit-level gateway machine. To the outside world it appears that all communication from your internal network is actually originating from the circuit-level gateway.
A proxy server is generally put in place to boost performance of the network, but can act as a sort of firewall as well. Proxy servers also hide your internal addresses as well so that all communications appear to originate from the proxy server itself. A proxy server will cache pages that have been requested. If User A goes to Yahoo.com the proxy server actually sends the request to Yahoo.com and retrieves the web page. If User B then connects to Yahoo.com the proxy server just sends the information it already retrieved for User A so it is returned much faster than having to get it from Yahoo.com again. You can configure a proxy server to block access to certain web sites and filter certain port traffic to protect your internal network.
An application gateway is essentially another sort of proxy server. The internal client first establishes a connection with the application gateway. The application gateway determines if the connection should be allowed or not and then establishes a connection with the destination computer. All communications go through two connections: client to application gateway and application gateway to destination. The application gateway monitors all traffic against its rules before deciding whether or not to forward it. As with the other proxy server types, the application gateway is the only address seen by the outside world so the internal network is protected.
Each of these mechanisms has its drawbacks as well as its advantages. The application gateway is considered to be a more advanced and secure firewall mechanism than the other three, but it uses more resources (memory and processor power) and can be slower. Packet filtering is generally faster and easier to implement, but is susceptible to attack from users faking their source IP address (IP spoofing) or source port to trick your firewall into thinking that the traffic should be allowed through.
To beef up packet filtering security, stateful inspection packet filtering, or stateful packet filtering (SPF) was introduced. Essentially, SPF performs the same as a packet filter, but with a couple of added measures. First, it looks at more details from each packet to determine what is contained within the packet rather than simply who and where it is from (or allegedly from). Second, it monitors communications between the two devices and compares the traffic not only to the rules it has been given, but also to the previous communications. If any communication seems out of context or out of the ordinary based on previous traffic the packet is rejected.
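The "out of context" check that SPF adds can be sketched by tracking flows. This is an illustrative model (real connection tracking lives in the kernel, e.g. netfilter's conntrack): an inbound packet is admitted only if it is the reply direction of a conversation an inside host already started.

```python
# Sketch of stateful inspection: remember outbound flows, and only
# allow inbound packets that belong to a tracked conversation.

class StatefulFilter:
    def __init__(self):
        self.flows = set()  # (src_ip, src_port, dst_ip, dst_port) tuples

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        # Record the flow so the reply direction is recognized later.
        self.flows.add((src_ip, src_port, dst_ip, dst_port))
        return "ALLOW"

    def inbound(self, src_ip, src_port, dst_ip, dst_port):
        # The reply to (a, ap, b, bp) arrives as (b, bp, a, ap).
        if (dst_ip, dst_port, src_ip, src_port) in self.flows:
            return "ALLOW"
        return "DENY"   # out-of-context traffic is rejected

fw = StatefulFilter()
```

A stateless packet filter would have to open the reply port permanently; the stateful version opens it only for the duration of a conversation it has seen begin.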
Many home routers come with built-in firewall capabilities. Generally, these tend to be simple packet filters. You can block all incoming connections on all ports if you are not acting as a server for anything. If you want to publish a web page from your computer, you would need to allow incoming traffic on Port 80 to get to your computer. If you want to be able to download files from your computer from outside using FTP, you would need to allow incoming connections on Port 21. A basic rule of security though is to start with the most restrictive and only open holes where it seems necessary.
In addition to the hardware firewall built into routers, there are also software applications called personal firewalls that you can run on your computer. These personal firewall applications monitor all incoming and outgoing communications on your computer as well as what services are trying to interact with what other services.
There are new vulnerabilities and flaws discovered every day which could allow a hacker to break into your computer, take control of it for use in a denial-of-service attack or steal or destroy your data. Keeping your software patched and running updated antivirus software are very important pieces of the puzzle, but having a firewall block incoming connections in the first place is definitely a wise idea as well. No one security solution will solve everything. The more lines of defense you have in place, the harder it is for hackers to get in and the safer you will be.
Packet Sniffing
A packet sniffer, sometimes referred to as a network monitor or network analyzer, can be used legitimately by a network or system administrator to monitor and troubleshoot network traffic. Using the information captured by the packet sniffer an administrator can identify erroneous packets and use the data to pinpoint bottlenecks and help maintain efficient network data transmission.
In its simplest form a packet sniffer simply captures all of the packets of data that pass through a given network interface. Typically, the packet sniffer would only capture packets that were intended for the machine in question. However, if placed into promiscuous mode, the packet sniffer is also capable of capturing ALL packets traversing the network regardless of destination.
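A sniffer has two jobs: capture raw bytes and decode them. Capture is platform-specific and requires elevated privileges (on Linux, something like `socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0800))` run as root delivers every IP frame on an interface), so the sketch below shows only the portable decoding step: pulling the interesting fields out of a fixed 20-byte IPv4 header:

```python
import struct

def parse_ipv4_header(data):
    """Decode the fixed 20-byte IPv4 header a sniffer sees on the wire."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": version_ihl >> 4,            # 4 for IPv4
        "ttl": ttl,
        "protocol": proto,                      # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),   # source IP, dotted quad
        "dst": ".".join(str(b) for b in dst),   # destination IP
    }
```

This is how a tool running in promiscuous mode turns a captured frame into the source/destination information an administrator (or an intruder) reads.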
By placing a packet sniffer on a network in promiscuous mode, a malicious intruder can capture and analyze all of the network traffic. Within a given network, username and password information is generally transmitted in clear text which means that the information would be viewable by analyzing the packets being transmitted.
A packet sniffer can only capture packet information within a given subnet. So it's not possible for a malicious attacker to place a packet sniffer on their home ISP network and capture traffic from inside your corporate network (although there are ways to more or less "hijack" services running on your internal network to effectively perform packet sniffing from a remote location). To capture your traffic, the packet sniffer needs to be running on a computer that is inside the corporate network as well. However, if one machine on the internal network becomes compromised through a Trojan or other security breach, the intruder could run a packet sniffer from that machine and use the captured username and password information to compromise other machines on the network.
Detecting rogue packet sniffers on your network is not an easy task. By its very nature the packet sniffer is passive. It simply captures the packets that are traveling to the network interface it is monitoring. That means there is generally no signature or erroneous traffic to look for that would identify a machine running a packet sniffer. There are ways to identify network interfaces on your network that are running in promiscuous mode though and this might be used as a means for locating rogue packet sniffers.
If you are one of the good guys and you need to maintain and monitor a network, I recommend you become familiar with network monitors or packet sniffers such as Ethereal (now known as Wireshark). Learn what types of information can be discerned from the captured data and how you can put it to use to keep your network running smoothly. But, also be aware that users on your network may be running rogue packet sniffers, either experimenting out of curiosity or with malicious intent, and that you should do what you can to make sure this does not happen.
What is Port Scanning?
TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are two of the protocols that make up the TCP/IP protocol suite which is used universally to communicate on the Internet. Each of these has ports 0 through 65535 available, so essentially there are more than 65,000 doors to lock.
The first 1024 ports (0 through 1023) are called the Well-Known Ports and are associated with standard services such as FTP, HTTP, SMTP or DNS. Some of the ports above 1023 also have commonly associated services, but the majority of these ports are not associated with any service and are available for a program or application to use to communicate on.
Port scanning software, in its most basic state, simply sends out a request to connect to the target computer on each port sequentially and makes a note of which ports responded or seem open to more in-depth probing.
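That most basic form, a full TCP connect() scan, can be sketched directly with the standard `socket` module. Only ever run this against hosts you are authorized to probe (a point the post returns to below):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Basic connect() scan: attempt a full TCP connection to each port,
    recording the ports where something answered."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))   # succeeds only if a service is listening
            open_ports.append(port)
        except OSError:
            pass                      # refused, timed out, or filtered
        finally:
            s.close()
    return open_ports
```

Real scanners like NMap add the stealthier techniques described below (SYN, FIN, timing tricks), but the sequential "knock on every door" loop is the same.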
If the port scan is being done with malicious intent, the intruder would generally prefer to go undetected. Network security applications can be configured to alert administrators if they detect connection requests across a broad range of ports from a single host. To get around this, the intruder can do the port scan in strobe or stealth mode. Strobing limits the ports to a smaller target set rather than blanket scanning all 65536 ports. Stealth scanning uses techniques such as slowing the scan. By scanning the ports over a much longer period of time you reduce the chance that the target will trigger an alert.
By setting different TCP flags or sending different types of TCP packets the port scan can generate different results or locate open ports in different ways. A SYN scan will tell the port scanner which ports are listening and which are not depending on the type of response generated. A FIN scan will generate a response (an RST packet) from closed ports, while ports that are open and listening will not respond, so the port scanner is still able to determine which ports are open and which are not.
There are a number of different methods to perform the actual port scans as well as tricks to hide the true source of port scan. You can read more about some of these by visiting these web sites: Port Scanning or Network Probes Explained.
It is possible to monitor your network for port scans. The trick, as with most things in information security, is to find the right balance between network performance and network safety. You could monitor for SYN scans by logging any attempt to send a SYN packet to a port that isn't open or listening. However, rather than being alerted every time a single attempt occurs (and possibly being awakened in the middle of the night for an otherwise innocent mistake) you should decide on thresholds to trigger the alert. For instance, you might say that if there are more than 10 SYN packet attempts to non-listening ports in a given minute, an alert should be triggered. You could design filters and traps to detect a variety of port scan methods, watching for a spike in FIN packets or just an anomalous number of connection attempts to a variety of ports and/or IP addresses from a single IP source.
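The thresholding idea above is simple to sketch: count SYN attempts to non-listening ports per source address inside a sliding time window, and alert when the count crosses the threshold. The class and field names here are illustrative, not any particular IDS's API:

```python
from collections import defaultdict

# Sketch of threshold-based SYN-scan detection: more than `threshold`
# SYNs to closed ports from one source within `window` seconds = alert.

class SynScanDetector:
    def __init__(self, threshold=10, window=60):
        self.threshold = threshold
        self.window = window
        self.attempts = defaultdict(list)   # src_ip -> list of timestamps

    def record_syn(self, src_ip, timestamp):
        """Log one SYN to a non-listening port; return True if an alert fires."""
        self.attempts[src_ip].append(timestamp)
        # Keep only attempts still inside the sliding window.
        self.attempts[src_ip] = [
            t for t in self.attempts[src_ip] if timestamp - t <= self.window
        ]
        return len(self.attempts[src_ip]) > self.threshold

detector = SynScanDetector(threshold=10, window=60)
```

Tuning `threshold` and `window` is exactly the performance-versus-safety balance the paragraph describes: too tight and innocent mistakes page you at 3 a.m., too loose and a slow stealth scan slips under the bar.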
To help ensure that your network is protected and secure you may wish to perform your own port scans. A MAJOR caveat here is to ensure you have the approval of all the powers that be before embarking on this project lest you find yourself on the wrong side of the law. To get accurate results it may be best to perform the port scan from a remote location using non-company equipment and a different ISP. Using software such as NMap you can scan a range of IP addresses and ports and find out what an attacker would see if they were to port scan your network. NMap in particular allows you to control almost every aspect of the scan and perform various types of port scans to fit your needs.
Once you find out what ports respond as being open by port scanning your own network you can begin to work on determining whether it's actually necessary for those ports to be accessible from outside your network. If they're not necessary you should shut them down or block them. If they are necessary, you can begin to research what sorts of vulnerabilities and exploits your network is open to by having these ports accessible and work to apply the appropriate patches or mitigation to protect your network as much as possible.
The 7 Habits of Highly Effective Network Professionals
1. BE BUSINESS SAVVY
The savvy Network Professional is engaged with people at all levels and departments within the organization. It isn't enough to simply interact with your fellow IT department members; you need to engage with business line managers, product managers, and executives. By interacting with a more diverse group of co-workers, you'll get access to the business intelligence that's as likely to be shared during a coffee break as it is during a formal business meeting. This intimate knowledge of what's happening in your organization will be invaluable when you're faced with the all too common scenario of deploying your resources to meet competing goals. By developing an understanding of who the players are (and who they aren't) you can more easily make IT decisions that will positively impact your company.
For example, if you know that sales and marketing organizations are planning to increase e-commerce initiatives, you can invest early in the research, acquisition, purchase, and management of the hardware and software this channel requires. All of your internal customers are looking to IT to help solve their business issues.
But how do you decide whether to implement a new collection system or a new contact management system? You'll likely start by evaluating the current solutions. Are they working as expected? Can small changes be made to improve performance? Are the systems effectively obsolete, making any further investment of questionable value? You also need to evaluate the business environment. Which system is more urgent? Which will have a greater impact on the company's revenue and profitability? By understanding the business realities on the ground, you can position yourself to make smarter decisions. For instance, what if you knew that the business unit seeking a new contact management system was growing at 200 percent a year, and would be responsible for the company's next product rollout?
This information makes your decision an easier one, but you can't always count on the intelligence being readily available. Only by playing an active and visible role in your company can you develop the business savvy that will help you succeed.
2. SET EXPECTATIONS APPROPRIATELY
Everybody’s an expert, right? How many times have business managers come to you and told you exactly what technology solutions they need to solve their business issues? It’s a critical part of your job not only to select the most appropriate solutions, but also to set expectations properly so that your users understand how much the solution will cost, how long it will take to deploy, and exactly what it can and can’t do. Often, it’s Network Professionals who take the hit when a solution doesn’t meet expectations. It’s in your best interests to close the gap between the business side of the house and IT. When managers know up front what they can expect, your job is much easier. And when you deliver in line with expectations, you’ll be putting yourself in a better position to meet the expectations of your internal customers and fulfill the requirements of your service level agreement.
3. BE FINANCIALLY PRUDENT
In order to make effective decisions, Network Professionals must understand common financial terms like Return On Investment (ROI) and Total Cost of Ownership (TCO) and be ready to discuss them with business line managers. By understanding both the up-front and long-term costs of technology solutions, you’ll be better able to guide your organization in making technology choices that will positively impact the business. Managing your budget involves looking not only at expenditures, but also at expected returns. By working with business line managers to understand how they manage P&L, you become a partner who helps them achieve their business goals as you spend your budget wisely.
4. BE A TECHNOLOGY REALIST.
It’s probably not a stretch to say that you love technology. But as a Network Professional, you also need to be a technology realist. While you may admire the elegance of a new technology solution, you’re realistic enough to know that what matters for your company is how that technology can be applied to solve business problems, improve processes, and increase sales. You have to be prepared to say no to shiny new software if it can’t solve the pain points your company is experiencing. By staying up-to-date on the latest technologies as well as on those coming down the road, you can separate the must-haves from the want-to-haves. And in doing this, you’ll be looked at as a credible source for technology advice and road-mapping, increasing your strategic value and enhancing your career.
5. BE CREDENTIAL-READY AND PRACTICE-PROVEN
You’re working in a global community, full of people with top-notch education and certifications. Employers are selecting candidates from the international talent pool, so you need to be able to compete. In this environment, certifications really do matter. Be sure to take advantage of employer reimbursement programs for training opportunities, but don’t be afraid to invest in getting yourself certified – you’ll quickly realize the return on this investment in your career. It’s also important to have practical experience and not be afraid to get your hands dirty. Stay on top of emerging technologies, and seize every opportunity to get involved with a new implementation to keep your skills sharp and up-to-date. Network Professionals who understand both the theory and practice of technology will see their achievements reflected in their salary and benefits.
6. BE DIPLOMATIC
In your role as a Network Professional, you’ll find yourself working with a diverse group of people in a wide variety of situations. From IT management to product managers, you’ll need to develop diplomatic skills that will allow you to navigate smoothly through your organization. Keep in mind that you’ll be called upon to explain technology to nontechnical employees, and you should learn how to explain pros and cons in language they can relate to.
7. CULTIVATE AN OPTIMISTIC OUTLOOK
The job of a Network Professional is a tough one. You’re forced to juggle dozens of demands, expectations, and realities from internal customers throughout your organization. You’re the first person they’ll call when something goes wrong, but you may never hear about it when things go right. When you come to work in the morning in a positive frame of mind, your day will fly by, and you’re more likely to have a fulfilling career.
NESSUS
WIRESHARK
Wednesday, June 3, 2009
SNORT
Snort is a free and open source network intrusion prevention system (NIPS) and network intrusion detection system (NIDS) capable of performing packet logging and real-time traffic analysis on IP networks. Snort was written by Martin Roesch and is now developed by Sourcefire, of which Roesch is the founder and CTO. Integrated enterprise versions with purpose built hardware and commercial support services are sold by Sourcefire.
Snort performs protocol analysis and content searching/matching, and is commonly used to actively block or passively detect a variety of attacks and probes, such as buffer overflows, stealth port scans, web application attacks, SMB probes, and OS fingerprinting attempts, amongst other features. The software is mostly used for intrusion prevention purposes, by dropping attacks as they are taking place. Snort can be combined with other software such as SnortSnarf, sguil, OSSIM, and the Basic Analysis and Security Engine (BASE) to provide a visual representation of intrusion data. With patches for the Snort source from Bleeding Edge Threats, support for packet stream antivirus scanning with ClamAV and network anomaly detection with SPADE in network layers 3 and 4 is possible with historical observation. (These patches appear to no longer be maintained.)
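Snort's detection logic is driven by its rule language. As a hedged illustration of its shape (the message, content string, and SID below are made up for this example, not taken from any real ruleset), a rule that alerts on a directory-traversal string in traffic headed for an internal web server might look like:

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"EXAMPLE possible directory traversal"; content:"../"; sid:1000001; rev:1;)
```

The header names the action, protocol, source, direction, and destination; the options in parentheses define what to match and how to label the alert. SIDs at 1000000 and above are conventionally reserved for locally written rules.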
Monday, June 1, 2009
Network Vulnerability Assessment Using Data Mining Techniques
The proposed framework would monitor network traffic in detail, analyzing and classifying data connections in order to carry out vulnerability assessment of the hosts and networks.
Problem:
Due to the dynamic nature of traffic characteristics and the ever-changing network environment, network vulnerability assessment has proven to be complex, error-prone, costly and inefficient for many large networked organizations.
This framework will comprise a set of techniques and algorithms to assess network vulnerabilities with the help of data mining.
One of the main problems to be addressed is how useful, up-to-date, well-organized and efficient current network vulnerability assessments are at reflecting the actual characteristics of network traffic.
Objective:
The main objective is to prepare a set of techniques and algorithms to analyze and assess network vulnerabilities:
(1) a data mining technique to deduce network vulnerabilities by mining the network traffic log, based on frequency and behavior; and
(2) a technique to identify the dominant vulnerabilities, and any decaying vulnerabilities, over time.
The secondary objective is to prepare a portable network vulnerability analyzer, which can be used to monitor and analyze vulnerabilities in the traffic generated by networks or network nodes. The device is meant to be connected to a computer's network port without changing the client's network topology or configuration. The proposed toolkit would sit between the LAN and the LAN's exit point, generally the WAN or Internet router, so that all packets leaving and entering the network pass through it. In most cases the toolkit would operate as a bridge on the network so that it is undetectable by users.
Deliverables
1. A set of techniques and algorithms to deduce network vulnerabilities by mining the network traffic log, based on frequency and behavior.
2. A new software toolkit to analyze network traffic for troubleshooting purposes while detecting unwanted traffic such as worm or virus traffic: a portable toolkit capable of analyzing and troubleshooting problems caused by worm, virus, or intrusion attacks.
3. A detailed study of existing and common network traffic analysis and classification techniques.
Methodology
Various software tools are available to measure network traffic. Some tools measure traffic by sniffing; others use SNMP-like methods to measure bandwidth use on servers, routers and so on. However, some vulnerability assessment work requires analyzing traffic in detail, and it may be necessary to position a traffic analyzer at different locations in the network to carry out vulnerability detection. It is therefore necessary to deploy a device, with the proper software toolkit, that does not disturb the network topology and can be set up fairly quickly.
Further, packet sniffers are very useful for network experts tracking down tricky problems, but the volume of information they generate is enormous. A fast broadband connection can carry thousands or millions of packets per second, and inspecting each one in detail is unlikely to help you make your network faster. In addition, understanding the output of these analyzers requires a detailed understanding of network protocols such as TCP/IP and HTTP. A broad, protocol-level overview would be useful, at least as a starting point for tracking down a network's vulnerabilities.
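Such a protocol-level overview can be produced with very little code once packets have been reduced to summary records. The sketch below is a minimal illustration (the hard-coded records are made-up examples; a real tool would parse them from a capture file or flow log) that tallies packet counts and byte volume per protocol:

```python
from collections import Counter

# Each record: (protocol label, byte count). In practice these would be
# parsed from a packet capture or flow log rather than hard-coded.
records = [
    ("TCP/HTTP", 1500), ("TCP/HTTP", 900), ("UDP/DNS", 120),
    ("TCP/SMB", 4000), ("UDP/DNS", 80), ("TCP/HTTP", 600),
]

packets = Counter()   # packets seen per protocol
volume = Counter()    # bytes seen per protocol
for proto, nbytes in records:
    packets[proto] += 1
    volume[proto] += nbytes

# Print the broad overview, largest traffic volume first.
for proto, nbytes in volume.most_common():
    print(f"{proto:10s} {packets[proto]:4d} packets {nbytes:6d} bytes")
```

Even this crude aggregation surfaces the kind of starting point the text describes: an unexpected protocol dominating the byte count is an immediate lead worth investigating.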
In this research, I would like to introduce a new technique for network vulnerability assessment using data mining, consisting of anomaly detection, generalization, and frequency-based mining rules. In summary, the steps are: (1) to reflect the current trend of network traffic and thereby assess, in real time from traffic log data files, whether the network contains vulnerabilities; (2) to provide a tool to analyze traffic patterns for further analysis and anomaly detection, including hidden vulnerabilities, and for decision making; (3) to apply various data mining techniques that handle both discrete and continuous attributes with operational efficiency and flexibility; and (4) to demonstrate that data mining based algorithms are not only feasible but also more accurate and effective (as the traffic log dataset grows in size and variation).
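The frequency-based idea behind step (1) can be sketched in a few lines. In this toy version (the log entries and the 5% rarity threshold are assumptions chosen for illustration, not part of the proposal), connection patterns whose relative frequency falls below a threshold are flagged as candidate anomalies:

```python
from collections import Counter

def rare_patterns(log, threshold=0.05):
    """Flag (source, destination port) patterns whose relative frequency
    falls below `threshold` -- rare behaviour is a candidate anomaly."""
    counts = Counter(log)
    total = len(log)
    return {pattern for pattern, n in counts.items() if n / total < threshold}

# Toy connection log: (source host, destination port).
log = (
    [("10.0.0.5", 80)] * 60 +    # routine web traffic
    [("10.0.0.6", 53)] * 38 +    # routine DNS traffic
    [("10.0.0.9", 445)] * 2      # unusual SMB probes
)

print(rare_patterns(log))  # only the rare SMB pattern is flagged
```

A real implementation would mine richer attributes (timing, payload behavior, direction) and learn the threshold from the data, but the principle is the same: frequency over the traffic log separates dominant patterns from the rare ones worth inspecting.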
The mining-based anomaly detection exposes many hidden vulnerabilities, including anomalies that only become detectable by analyzing traffic logs over long time periods and would be missed by short-period analysis. As a result, this analysis may uncover new types of anomalies in the networks.
In conclusion, data mining will be shown to be not only a viable option but also a practical, effective and critical approach to real-time network vulnerability assessment.
References
1. TANDI: Threat Assessment of Network Data and Information
By Jared Holsopple, Shanchieh Jay Yang, and Moises Sudit
2. A Graph-Based System for Network-Vulnerability Analysis
By Cynthia Phillips, Laura Painton Swiler
3. Scalable, Graph-Based Network Vulnerability Analysis
By Paul Ammann, Duminda Wijesekera, Saket Kaushik
4. Managing a Network Vulnerability Assessment
By Thomas R. Peltier, Justin Peltier and John A. Blackley
ISBN:0849312701
Auerbach Publications 2003
5. Network vulnerability assessment using Bayesian networks
By Yu Liu, Hong Man
6. Worm Traffic Analysis and Characterization
By Dainotti A., Pescapé A., Ventre G.
Univ. of Napoli Federico II, Naples