Tuesday, July 14, 2009

Encryption Algorithms

Encryption algorithms are commonly used in computer communications, usually to provide secure transfers. When an algorithm is applied to a transfer, the file is first translated into seemingly meaningless ciphertext and transferred in that form; the receiving computer uses a key to translate the ciphertext back into its original form. So if the message or file is intercepted before it reaches the receiving computer, it is in an unusable (encrypted) form.

Here are some commonly used algorithms:

DES/3DES or TripleDES
This encryption algorithm, the Data Encryption Standard, was first adopted by the U.S. Government in the late 1970s. It is commonly used in ATMs (to encrypt PINs) and in UNIX password encryption. Triple DES (3DES) has replaced the original as a more secure method of encryption: it encrypts the data three times and uses a different key for at least one of the passes.
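To see why the three passes matter, here is a toy sketch of the 3DES "EDE" (encrypt-decrypt-encrypt) layering. A single-byte XOR stands in for real DES purely for illustration; it is not secure and not the actual DES cipher, it only shows how the three keyed stages nest and why at least one key must differ.

```python
# Toy illustration of the Triple DES "EDE" construction.
# XOR stands in for DES -- NOT secure, only shows the layering.
def toy_encrypt(block: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in block)

def toy_decrypt(block: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in block)  # XOR is its own inverse

def triple_ede_encrypt(block, k1, k2, k3):
    # 3DES applies encrypt(k1), then decrypt(k2), then encrypt(k3).
    # With k1 == k2 the first two stages cancel out, leaving only a
    # single encryption -- which is why the keys must not all match.
    return toy_encrypt(toy_decrypt(toy_encrypt(block, k1), k2), k3)

def triple_ede_decrypt(block, k1, k2, k3):
    # Decryption runs the three stages in reverse order.
    return toy_decrypt(toy_encrypt(toy_decrypt(block, k3), k2), k1)
```

Running the two functions back to back recovers the original block, and setting k1 equal to k2 collapses the scheme to a single pass under k3.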

Blowfish
Blowfish is a symmetric block cipher that is unpatented and free to use. It was developed by Bruce Schneier and introduced in 1993.

AES
Advanced Encryption Standard or Rijndael; it uses the Rijndael block cipher approved by the National Institute of Standards and Technology (NIST). AES was designed by cryptographers Joan Daemen and Vincent Rijmen; NIST selected it in 2000 to replace DES as the U.S. Government encryption standard, and published it as FIPS 197 in 2001.

Twofish
Twofish is a block cipher designed by Counterpane Labs. It was one of the five Advanced Encryption Standard (AES) finalists and is unpatented and open source.

IDEA
IDEA (International Data Encryption Algorithm) was used in Pretty Good Privacy (PGP) Version 2 and is an optional algorithm in OpenPGP. IDEA features 64-bit blocks with a 128-bit key.

MD5
MD5 was developed by Professor Ronald Rivest and is used in creating digital signatures. It is a one-way hash function optimized for 32-bit machines. It replaced the earlier MD4 algorithm.
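Python's standard library makes the one-way property easy to demonstrate: computing the 128-bit digest is trivial, reversing it is not, and even a one-letter change to the input produces a completely different digest.

```python
import hashlib

# MD5's 128-bit digest is printed as 32 hexadecimal characters.
digest = hashlib.md5(b"The quick brown fox").hexdigest()
altered = hashlib.md5(b"The quick brown fix").hexdigest()

print(len(digest))        # 32 hex chars = 128 bits
print(digest == altered)  # False: one changed letter, new digest
```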

SHA-1
SHA-1 is a hashing algorithm similar to MD5, but it produces a longer (160-bit) digest and may replace MD5 since it offers more security.

HMAC
HMAC is not a hash function itself but a keyed construction built on top of one: it combines a secret key with a hash such as MD5 or SHA-1 (giving HMAC-MD5 or HMAC-SHA1), so that only holders of the key can generate or verify the resulting digest.
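The standard library's `hmac` module shows the difference from a plain digest: without the shared key, an attacker who can modify the message cannot produce a matching tag. The key and message below are of course just placeholders.

```python
import hashlib
import hmac

key = b"shared-secret"                 # placeholder shared key
msg = b"transfer $100 to alice"        # placeholder message

# HMAC-SHA1: mixes the secret key into the SHA-1 computation.
tag = hmac.new(key, msg, hashlib.sha1).hexdigest()

# The receiver recomputes the tag with the same key and compares.
# compare_digest does a constant-time comparison to avoid timing leaks.
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha1).hexdigest())
```

A tag computed with any other key will not match, which is what lets the receiver detect tampering.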

RSA Security
RC4 - RC4 is a variable-key-size stream cipher based on the use of a random permutation.
RC5 - This is a parameterized algorithm with a variable block size, key size and number of rounds.
RC6 - An evolution of RC5, RC6 is also a parameterized algorithm with variable block size, key size and number of rounds. It adds integer multiplication and uses four working registers.
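RC4 is compact enough to sketch in full: a key-scheduling step builds a 256-byte permutation, and a generation step walks that permutation to produce a keystream that is XORed with the data. This is an illustrative implementation only; RC4's keystream biases are well documented (they broke WEP, for instance), so it should not be used to protect real data.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): build the 256-byte permutation.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]

    # Pseudo-random generation algorithm (PRGA): XOR keystream into data.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Because encryption is just an XOR with the keystream, running the same function twice with the same key recovers the plaintext.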
 
References
MyCrypto.net. (n.d.). Encryption algorithms. Retrieved October 25, 2006, from http://www.mycrypto.net/encryption/crypto_algorithms.html

Webopedia. (2006). What is encryption algorithm? – A Word Definition From the Webopedia Computer Dictionary. Jupitermedia Corporation. Retrieved October 25, 2006, from http://www.webopedia.com/TERM/E/encryption_algorithm.html

Monday, July 6, 2009

The ISO27001 Certification Process

ISO/IEC 27001, part of the growing ISO/IEC 27000 family of standards, is an Information Security Management System (ISMS) standard published in October 2005 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Its full name is ISO/IEC 27001:2005 - Information technology -- Security techniques -- Information security management systems -- Requirements but it is commonly known as "ISO 27001".

ISO/IEC 27001 formally specifies a management system that is intended to bring information security under explicit management control. Being a formal specification means that it mandates specific requirements. Organizations that claim to have adopted ISO/IEC 27001 can therefore be formally audited and certified compliant with the standard (more below).
Most organizations have a number of information security controls. Without an ISMS, however, the controls tend to be somewhat disorganized and disjointed, having often been implemented as point solutions to specific situations or simply as a matter of convention. Maturity models typically refer to this stage as "ad hoc". The security controls in operation typically address certain aspects of IT or data security specifically, leaving non-IT information assets (such as paperwork and proprietary knowledge) less well protected on the whole. Business continuity planning and physical security, for example, may be managed quite independently of IT or information security, while Human Resources practices may make little reference to the need to define and assign information security roles and responsibilities throughout the organization.

ISO/IEC 27001 requires that management:
  • Systematically examines the organization's information security risks, taking account of the threats, vulnerabilities and impacts;
  • Designs and implements a coherent and comprehensive suite of information security controls and/or other forms of risk treatment (such as risk avoidance or risk transfer) to address those risks that it deems unacceptable; and
  • Adopts an overarching management process to ensure that the information security controls continue to meet the organization's information security needs on an ongoing basis.
While other sets of information security controls may potentially be used within an ISO/IEC 27001 ISMS as well as, or even instead of, ISO/IEC 27002 (the Code of Practice for Information Security Management), these two standards are normally used together in practice. Annex A to ISO/IEC 27001 succinctly lists the information security controls from ISO/IEC 27002, while ISO/IEC 27002 provides additional information and implementation advice on the controls.
Organizations that implement a suite of information security controls in accordance with ISO/IEC 27002 are likely simultaneously to meet many of the requirements of ISO/IEC 27001, but may lack some of the overarching management system elements. The converse is also true, in other words an ISO/IEC 27001 compliance certificate provides assurance that the management system for information security is in place, but says little about the absolute state of information security within the organization. Technical security controls such as antivirus and firewalls are not normally audited in ISO/IEC 27001 certification audits: the organization is essentially presumed to have adopted all necessary information security controls since the overall ISMS is in place and is deemed adequate by satisfying the requirements of ISO/IEC 27001. Furthermore, management determines the scope of the ISMS for certification purposes and may limit it to, say, a single business unit or location. The ISO/IEC 27001 certificate does not necessarily mean the remainder of the organization, outside the scoped area, has an adequate approach to information security management.

Other standards in the ISO/IEC 27000 family of standards provide additional guidance on certain aspects of designing, implementing and operating an ISMS, for example on information security risk management (ISO/IEC 27005).


Certification

ISMSs may be certified compliant with ISO/IEC 27001 by a number of Accredited Registrars worldwide. Certification against any of the recognized national variants of ISO/IEC 27001 (e.g. JIS Q 27001, the Japanese version) by an accredited certification body is functionally equivalent to certification against ISO/IEC 27001 itself.

In some countries, the bodies which verify conformity of management systems to specified standards are called "certification bodies", in others they are known as "registration bodies", "assessment and registration bodies", "certification/ registration bodies", and sometimes "registrars".

ISO/IEC 27001 certification, like other ISO management system certifications, usually involves a three-stage audit process:
  • Stage 1 is a preliminary, informal review of the ISMS, for example checking the existence and completeness of key documentation such as the organization's information security policy, Statement of Applicability (SoA) and Risk Treatment Plan (RTP). This stage serves to familiarize the auditors with the organization and vice versa.
  • Stage 2 is a more detailed and formal compliance audit, independently testing the ISMS against the requirements specified in ISO/IEC 27001. The auditors will seek evidence to confirm that the management system has been properly designed and implemented, and is in fact in operation (for example by confirming that a security committee or similar management body meets regularly to oversee the ISMS). Certification audits are usually conducted by ISO/IEC 27001 Lead Auditors. Passing this stage results in the ISMS being certified compliant with ISO/IEC 27001.
  • Stage 3 involves follow-up reviews or audits to confirm that the organization remains in compliance with the standard. Certification maintenance requires periodic re-assessment audits to confirm that the ISMS continues to operate as specified and intended. These should happen at least annually but (by agreement with management) are often conducted more frequently, particularly while the ISMS is still maturing.

Sunday, July 5, 2009

Top 10 Worst Computer Worms of All Time

The Internet is an Internet lover's paradise, a gamer's haven, a business's lifeline, and a hacker's playground. Over the past two decades, hundreds of worms have devastated the infrastructure of millions of computers around the world, causing billions of dollars of damage, and the life of the worm is far from over. Let's take a look at the last 20 years to see which of these worms have stood out from among the rest.

10. Jerusalem (also known as BlackBox)
Discovered in 1987, Jerusalem is one of the earliest worms. It is also one of the most widely known viruses, deleting files that are executed on each Friday the 13th. Its name comes from the city where it was first detected: Jerusalem.
The worm, which infects DOS, increases the file size of all files run within DOS (with the exception of COMMAND.COM).
Jerusalem is a variant of the Suriv virus, which also deletes files at random periods during the year (April Fool's Day and/or Friday the 13th depending on the variant). The Jerusalem worm inspired a host of similar worms that grow by a specified file size when executed. Another variant, Frère, plays the song Frère Jacques on the 13th day of the month.
While Jerusalem and its relatives were quite common in their day, they became less of a threat when Windows was introduced.

9. Michelangelo
In 1991, thousands of machines running MS-DOS were hit by a new worm, one which was scheduled to be activated on the artist Michelangelo's birthday (March 6th). On that day, the virus would overwrite the hard disk or change the master boot record of infected hosts.
When the worm came to mainstream attention, mass hysteria reigned and millions of computers were believed to be at risk. After March 6th, however, it was realized that the damage was minimal. Only 10,000 to 20,000 cases of data loss were reported.
Ironically, however, because of the media hype, the period before March 6, 1992 became known as "Michelangelo Madness," with users buying anti-virus software in droves, some for the very first time. In a way, the "madness" led many people to prepare for the outbreak and helped minimize the actual damage caused by the worm.

8. Storm Worm
One of the newest worms to hit the Internet was the Storm Worm, which debuted in January of 2007. Its name came from a widely circulated email about the Kyrill weather storm in Europe, and its subject was "230 dead as storm batters Europe." The virus first hit on January 19th, and three days later, the virus accounted for 8% of all infected machines.
If your computer was infected by the Storm Worm, your machine became part of a large botnet. The botnet acted to perform automated tasks that ranged from gathering data on the host machine, to DDOSing websites, to sending infected emails to others. As of September of this year, an estimated 1 million to 10 million computers were still part of this botnet, and each of these computers was infected by one of the 1.2 billion emails sent from the infected hosts.

Storm Worm is a difficult worm to track down because the botnet is decentralized and the computers that are part of the botnet are consistently being updated with the fast flux DNS technique. Consequently, it has been difficult for infected machines to be isolated and cleaned.

7. Sobig
In 2003, millions of computers were infected with the Sobig worm and its variants. The worm was disguised as a benign email. The attachment was often a *.pif or *.scr file that would infect any host if downloaded and executed. Sobig-infected hosts would then activate their own SMTP host, gathering email addresses and continually propagating through additional messages.
Sobig depended heavily on public websites to execute additional stages of the virus. Fortunately, in earlier cases, these sites were shut down after the discovery of the worm. Later, when Geocities was found to be the primary hosting point for Sobig variants, the worm would instead communicate with cable modems that were hacked that would later serve as another stage in the worm's execution.
The result? Sobig infected approximately 500,000 computers worldwide and cost as much as $1 billion in lost productivity.

6. MSBlast
The summer of 2003 wasn't much easier for those building anti-virus definitions or those at businesses or academic institutions. In July of that year, Microsoft announced a vulnerability within Windows. A month later, that vulnerability was exploited. This worm was called MSBlast, a name created by the worm's author, and it included a personal message from the author to Bill Gates. The note read, "billy gates why do you make this possible? Stop making money and fix your software!!"
When MSBlast hit, it installed a TFTP (Trivial File Transfer Protocol) server and downloaded code onto the infected host. Within several hours of its discovery, it had hit nearly 7,000 computers. Six months later, over 25 million hosts were known to be infected. The Windows Blaster Worm Removal Tool was finally launched by Microsoft in January of 2004 to remove traces of the worm.
A 19-year-old from Minnesota, Jeffrey Lee Parson, was arrested and sentenced to 18 months in prison with 10 months of community service after launching a variant of the MSBlast worm that affected nearly 50,000 computers.

5. Melissa
Want porn but don't have any? In 1999, hungry and curious minds downloaded a file called List.DOC in the alt.sex Usenet discussion group, assuming that they were getting free access to over 80 pornographic websites. Little did they know that the file within was responsible for mass-mailing thousands of recipients and shutting down nearly the entire Internet.
You get what you pay for.
Melissa spread through Microsoft Word 97 and Word 2000, mass emailing the first 50 entries from a user's address book in Outlook 97/98 when the document was opened. The Melissa worm randomly inserted quotes from The Simpsons TV show into documents on the host computer and deleted critical Windows files.
The Melissa worm caused $1 billion in damages. Melissa's creator, David Smith of New Jersey, named the worm after a lap dancer he met while vacationing in Florida. Smith was imprisoned for 20 months and fined $5,000.

4. Code Red
Friday the 13th was a bad day in July of 2001; it was the day Code Red was released. The worm took advantage of a buffer overflow vulnerability in Microsoft IIS servers and would self-replicate by exploiting the same vulnerability in other Microsoft IIS machines. Web servers infected by the Code Red worm would display the following message:
HELLO! Welcome to http://www.worm.com! Hacked By Chinese!
After 20 to 27 days, infected machines would attempt to launch a denial of service on many IP addresses, including the IP address of www.whitehouse.gov.
Code Red and its successor, Code Red II, are known as two of the most expensive worms in Internet history, with damages estimated at $2 billion and at a rate of $200 million in damages per day.

3. Nimda
In the fall of 2001, Nimda ("admin" spelled backwards) infected a variety of Microsoft machines very rapidly through an email exploit. Nimda spread by finding email addresses in .html files located in the user's web cache folder and by looking at the user's email contacts as retrieved by the MAPI service. The consequences were heavy: all web related files were appended with Javascript that allowed further propagation of the worm, users' drives were shared without their consent, and "Guest" user accounts with Administrator privileges were created and enabled.
A market research firm estimated that Nimda caused $530 million in damages after only one week of propagation.
Several months later, reports indicated that Nimda was still a threat.

2. ILOVEYOU (also known as VBS/Loveletter or Love Bug Worm)
You may have gotten an email in 2000 with the subject line "ILOVEYOU." If you deleted it, you were safe from one of the most costly worms in computer history. The attachment in that email, a file called LOVE-LETTER-FOR-YOU.TXT.vbs, started a worm that spread like wildfire by accessing email addresses found in users' Outlook contact lists. Unsuspecting recipients, believing the email to be benign, would execute the document only to have most of their files overwritten.
The net result was an estimated $5.5 billion to $8.7 billion in damages. Ten percent of all Internet-connected computers were hit.
Onel A. de Guzman, the creator of the virus and a resident of the Philippines, had all charges dropped against him for creating the worm because there were no laws at the time prohibiting the creation of computer worms. Since then, the government of the Philippines has laid out penalties for cybercrime that include imprisonment for 6 months to 3 years and a fine of at least 100,000 pesos (USD $2000).

1. Morris Worm (also known as the Great Worm)
How big is the Internet, you ask? In 1988, a Cornell University student named Robert Tappan Morris launched 99 lines of code in his quest for the answer. While his intentions were not malicious, bugs in his code caused affected hosts to encounter a plethora of stability problems that effectively made those systems unusable. The result was increased load averages on over 6,000 UNIX machines across the country, causing between $10,000,000 and $100,000,000 in damage.

FIREWALLS

As you begin to learn the essentials of computer and network security you will encounter many new terms: encryption, port, Trojan and more. Firewall will be a term that will appear again and again. So, what is a firewall?
A firewall is basically the first line of defense for your network. The basic purpose of a firewall is to keep uninvited guests from browsing your network. A firewall can be a hardware device or a software application and generally is placed at the perimeter of the network to act as the gatekeeper for all incoming and outgoing traffic.

A firewall allows you to establish certain rules to determine what traffic should be allowed in or out of your private network. Depending on the type of firewall implemented you could restrict access to only certain IP addresses or domain names, or you can block certain types of traffic by blocking the TCP/IP ports they use.

There are basically four mechanisms used by firewalls to restrict traffic. One device or application may use more than one of these in conjunction with each other to provide more in-depth protection. The four mechanisms are packet-filtering, circuit-level gateway, proxy server and application gateway.

A packet filter intercepts all traffic to and from the network and evaluates it against the rules you provide. Typically the packet filter can assess the source IP address, source port, destination IP address and destination port. It is these criteria that you can filter on, allowing or disallowing traffic from certain IP addresses or on certain ports.
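That rule-matching logic can be sketched in a few lines. This is a toy model, not a real firewall: the rule table, addresses and default-deny policy below are illustrative, and rules are checked in order with `"*"` as a wildcard, the way many packet filters apply a first-match-wins rule list.

```python
# Toy packet-filter rule table: (action, (src_ip, src_port, dst_ip, dst_port)).
# "*" matches anything; the first matching rule wins.
RULES = [
    ("allow", ("*", "*", "10.0.0.5", 80)),     # web traffic to our server
    ("deny",  ("192.0.2.66", "*", "*", "*")),  # a blocked source address
]

def check_packet(src_ip, src_port, dst_ip, dst_port, default="deny"):
    packet = (src_ip, src_port, dst_ip, dst_port)
    for action, rule in RULES:
        if all(r == "*" or r == p for r, p in zip(rule, packet)):
            return action
    # Anything no rule explicitly allows falls through to the default.
    return default
```

Note that rule order matters: here the allow rule for port 80 is checked before the deny rule, so even the blocked host can reach the web server; swapping the two lines reverses that decision.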
A circuit-level gateway blocks all incoming traffic to any host but itself. Internally, the client machines run software to allow them to establish a connection with the circuit-level gateway machine. To the outside world it appears that all communication from your internal network is actually originating from the circuit-level gateway.

A proxy server is generally put in place to boost performance of the network, but can act as a sort of firewall as well. Proxy servers also hide your internal addresses as well so that all communications appear to originate from the proxy server itself. A proxy server will cache pages that have been requested. If User A goes to Yahoo.com the proxy server actually sends the request to Yahoo.com and retrieves the web page. If User B then connects to Yahoo.com the proxy server just sends the information it already retrieved for User A so it is returned much faster than having to get it from Yahoo.com again. You can configure a proxy server to block access to certain web sites and filter certain port traffic to protect your internal network.
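The caching behavior described above reduces to "fetch once, serve repeats from memory." Here is a minimal sketch; `fetch_fn` is a stand-in for the real network retrieval, and the class name and counter are purely illustrative.

```python
# A toy caching proxy: the first request triggers a real fetch,
# repeat requests for the same URL are answered from the cache.
class CachingProxy:
    def __init__(self, fetch_fn):
        self.fetch_fn = fetch_fn   # stand-in for the network retrieval
        self.cache = {}
        self.fetches = 0           # how many real fetches occurred

    def get(self, url):
        if url not in self.cache:
            # User A's request: go out to the origin site.
            self.fetches += 1
            self.cache[url] = self.fetch_fn(url)
        # User B's request: served straight from memory.
        return self.cache[url]
```

A real proxy would also honor cache-expiry headers and could consult a blocklist in `get()` before fetching, which is the filtering role described above.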

An application gateway is essentially another sort of proxy server. The internal client first establishes a connection with the application gateway. The application gateway determines if the connection should be allowed or not and then establishes a connection with the destination computer. All communications go through two connections- client to application gateway and application gateway to destination. The application gateway monitors all traffic against its rules before deciding whether or not to forward it. As with the other proxy server types, the application gateway is the only address seen by the outside world so the internal network is protected.

Each of these mechanisms has its drawbacks as well as its advantages. The application gateway is considered to be a more advanced and secure firewall mechanism than the other three, but it uses more resources (memory and processor power) and can be slower. Packet filtering is generally faster and easier to implement, but is susceptible to attack from users faking their source IP address (IP spoofing) or source port to trick your firewall into thinking that the traffic should be allowed through.
To beef up packet filtering security, stateful inspection packet filtering, or stateful packet filtering (SPF) was introduced. Essentially, SPF performs the same as a packet filter, but with a couple of added measures. First, it looks at more details from each packet to determine what is contained within the packet rather than simply who and where it is from (or allegedly from). Second, it monitors communications between the two devices and compares the traffic not only to the rules it has been given, but also to the previous communications. If any communication seems out of context or out of the ordinary based on previous traffic the packet is rejected.
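The second SPF measure, comparing traffic to previous communications, boils down to keeping a connection table. The sketch below shows only that core idea, with made-up addresses: inbound packets are accepted only if an inside host already opened a conversation with that outside host.

```python
# Sketch of stateful inspection: track which conversations an inside
# host initiated, and reject inbound packets with no matching context.
class StatefulFilter:
    def __init__(self):
        self.connections = set()   # (inside_addr, outside_addr) pairs seen

    def outbound(self, inside, outside):
        # An inside host opened a conversation; remember it so that
        # replies from the outside host have context.
        self.connections.add((inside, outside))

    def inbound(self, outside, inside):
        # Out-of-context traffic (no prior outbound packet) is rejected.
        return (inside, outside) in self.connections
```

A production filter would also track ports, TCP state and timeouts, but the accept/reject decision rests on this same lookup.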

Many home routers come with built-in firewall capabilities. Generally, these tend to be simple packet filters. You can block all incoming connections on all ports if you are not acting as a server for anything. If you want to publish a web page from your computer, you would need to allow incoming traffic on Port 80 to get to your computer. If you want to be able to download files from your computer from outside using FTP, you would need to allow incoming connections on Port 21. A basic rule of security though is to start with the most restrictive and only open holes where it seems necessary.

In addition to the hardware firewall built into routers, there are also software applications called personal firewalls that you can run on your computer. These personal firewall applications monitor all incoming and outgoing communications on your computer as well as what services are trying to interact with what other services.

There are new vulnerabilities and flaws discovered everyday which could allow a hacker to break into your computer, take control of it for use in a denial-of-service attack or steal or destroy your data. Keeping your software patched and running updated antivirus software are very important pieces of the puzzle, but having a firewall block incoming connections in the first place is definitely a wise idea as well. No one security solution will solve everything. The more lines of defense you have in place, the harder it is for hackers to get in and the safer you will be.

Packet Sniffing

It's a cruel irony in information security that many of the features that make computers easier or more efficient to use, and the tools used to protect and secure the network, can also be used to exploit and compromise those same computers and networks. This is the case with packet sniffing.
A packet sniffer, sometimes referred to as a network monitor or network analyzer, can be used legitimately by a network or system administrator to monitor and troubleshoot network traffic. Using the information captured by the packet sniffer an administrator can identify erroneous packets and use the data to pinpoint bottlenecks and help maintain efficient network data transmission.

In its simple form a packet sniffer simply captures all of the packets of data that pass through a given network interface. Typically, the packet sniffer would only capture packets that were intended for the machine in question. However, if placed into promiscuous mode, the packet sniffer is also capable of capturing ALL packets traversing the network regardless of destination.
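Once a frame is captured, the sniffer's real work is decoding the raw bytes. The capture itself needs a raw socket in promiscuous mode (and root privileges), so it is omitted here; this sketch just shows the decoding step, unpacking the fixed 20-byte IPv4 header with the standard library.

```python
import socket
import struct

def parse_ipv4_header(raw: bytes):
    # Unpack the fixed 20-byte IPv4 header (network byte order).
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,       # high nibble: IP version
        "ttl": ttl,
        "protocol": proto,                 # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),      # dotted-quad source address
        "dst": socket.inet_ntoa(dst),
    }
```

From fields like these a sniffer (or an administrator) can tell who is talking to whom and over which protocol, before even looking at the payload.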

By placing a packet sniffer on a network in promiscuous mode, a malicious intruder can capture and analyze all of the network traffic. Within a given network, username and password information is generally transmitted in clear text which means that the information would be viewable by analyzing the packets being transmitted.

A packet sniffer can only capture packet information within a given subnet. So it's not possible for a malicious attacker to place a packet sniffer on their home ISP network and capture traffic from inside your corporate network (although there are ways to more or less "hijack" services running on your internal network to effectively perform packet sniffing from a remote location). To capture your traffic, the packet sniffer generally needs to be running on a computer inside the corporate network. However, if one machine on the internal network becomes compromised through a Trojan or other security breach, the intruder could run a packet sniffer from that machine and use the captured username and password information to compromise other machines on the network.

Detecting rogue packet sniffers on your network is not an easy task. By its very nature the packet sniffer is passive. It simply captures the packets that are traveling to the network interface it is monitoring. That means there is generally no signature or erroneous traffic to look for that would identify a machine running a packet sniffer. There are ways to identify network interfaces on your network that are running in promiscuous mode though and this might be used as a means for locating rogue packet sniffers.

If you are one of the good guys and you need to maintain and monitor a network, I recommend you become familiar with network monitors or packet sniffers such as Ethereal (now known as Wireshark). Learn what types of information can be discerned from the captured data and how you can put it to use to keep your network running smoothly. But, also be aware that users on your network may be running rogue packet sniffers, either experimenting out of curiosity or with malicious intent, and that you should do what you can to make sure this does not happen.

What is Port Scanning?

What is port scanning? It is similar to a thief going through your neighborhood and checking every door and window on each house to see which ones are open and which ones are locked.
TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are two of the protocols that make up the TCP/IP protocol suite which is used universally to communicate on the Internet. Each of these has ports 0 through 65535 available so essentially there are more than 65,000 doors to lock.

The first 1024 TCP ports (0 through 1023) are called the Well-Known Ports and are associated with standard services such as FTP, HTTP, SMTP or DNS. Some of the ports above 1023 also have commonly associated services, but the majority of these ports are not associated with any service and are available for a program or application to use to communicate on.

Port scanning software, in its most basic state, simply sends out a request to connect to the target computer on each port sequentially and makes a note of which ports responded or seem open to more in-depth probing.
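That basic sequential connect scan fits in a few lines of Python. This is a minimal sketch for scanning machines you own or have written permission to test; the timeout value is an arbitrary choice.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Sequentially try a TCP connection to each port; return the open ones."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds,
            # i.e. something is listening on that port.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Running it against your own machine, e.g. `scan_ports("127.0.0.1", range(1, 1025))`, reports which of the well-known ports have a listening service.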

If the port scan is being done with malicious intent, the intruder would generally prefer to go undetected. Network security applications can be configured to alert administrators if they detect connection requests across a broad range of ports from a single host. To get around this the intruder can do the port scan in strobe or stealth mode. Strobing limits the ports to a smaller target set rather than blanket scanning all 65536 ports. Stealth scanning uses techniques such as slowing the scan. By scanning the ports over a much longer period of time you reduce the chance that the target will trigger an alert.

By setting different TCP flags or sending different types of TCP packets the port scan can generate different results or locate open ports in different ways. A SYN scan will tell the port scanner which ports are listening and which are not depending on the type of response generated. A FIN scan will generate a response from closed ports- but ports that are open and listening will not send a response, so the port scanner will be able to determine which ports are open and which are not.

There are a number of different methods to perform the actual port scans as well as tricks to hide the true source of port scan. You can read more about some of these by visiting these web sites: Port Scanning or Network Probes Explained.

It is possible to monitor your network for port scans. The trick, as with most things in information security, is to find the right balance between network performance and network safety. You could monitor for SYN scans by logging any attempt to send a SYN packet to a port that isn't open or listening. However, rather than being alerted every time a single attempt occurs (and possibly being awakened in the middle of the night for an otherwise innocent mistake) you should decide on thresholds to trigger the alert. For instance, you might say that if there are more than 10 SYN packet attempts to non-listening ports in a given minute, an alert should be triggered. You could design filters and traps to detect a variety of port scan methods, watching for a spike in FIN packets or an anomalous number of connection attempts to a variety of ports and/or IP addresses from a single source IP.
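The threshold idea above can be sketched as a small sliding-window counter. The class name, threshold and window values are illustrative; a real monitor would feed it events from the firewall or IDS log.

```python
from collections import defaultdict

class ScanDetector:
    """Flag a source IP that probes too many closed ports in one window."""

    def __init__(self, threshold=10, window=60.0):
        self.threshold = threshold     # e.g. more than 10 probes...
        self.window = window           # ...within 60 seconds
        self.events = defaultdict(list)  # src_ip -> probe timestamps

    def record_probe(self, src_ip, timestamp):
        """Log one SYN to a non-listening port; return True if alert fires."""
        hits = self.events[src_ip]
        hits.append(timestamp)
        # Keep only probes inside the sliding window.
        hits = [t for t in hits if timestamp - t <= self.window]
        self.events[src_ip] = hits
        return len(hits) > self.threshold
```

With the defaults, ten probes in a minute stay silent and the eleventh raises the alert, while a stray probe hours later starts a fresh count.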

To help ensure that your network is protected and secure you may wish to perform your own port scans. A MAJOR caveat here is to ensure you have the approval of all the powers that be before embarking on this project lest you find yourself on the wrong side of the law. To get accurate results it may be best to perform the port scan from a remote location using non-company equipment and a different ISP. Using software such as NMap you can scan a range of IP addresses and ports and find out what an attacker would see if they were to port scan your network. NMap in particular allows you to control almost every aspect of the scan and perform various types of port scans to fit your needs.

Once you find out what ports respond as being open by port scanning your own network you can begin to work on determining whether its actually necessary for those ports to be accessible from outside your network. If they're not necessary you should shut them down or block them. If they are necessary, you can begin to research what sorts of vulnerabilities and exploits your network is open to by having these ports accessible and work to apply the appropriate patches or mitigation to protect your network as much as possible.

The 7 Habits of Highly Effective Network Professionals

In today’s business environment, the network IS the business. Without email, the Internet, IM, VoIP, and dozens of other technologies, effective business communication simply can’t happen. For the people charged with selecting, implementing and maintaining the networks and applications that support business goals, the challenges have never been greater. Highly Effective Network Professionals have adopted the seven habits detailed below in order to build success for their careers and their companies. These seven habits are intended to be a useful guide to building your career. If you adopt them, the rewards can be high. You’ll play a prominent role in your company, working directly with business line and product managers, presenting to executives and the board of directors, and dealing with customers and partners. You’ll be contributing to the success of your company, not just operating a support function with little visibility and few rewards.

1. BE BUSINESS SAVVY

The savvy Network Professional is engaged with people at all levels and departments within the organization. It isn’t enough to simply interact with your fellow IT department members; you need to engage with business line managers, product managers, and executives. By interacting with a more diverse group of co-workers, you’ll get access to the business intelligence that’s as likely to be shared during a coffee break as it is during a formal business meeting. This intimate knowledge of what’s happening in your organization will be invaluable when you’re faced with the all too common scenario of deploying your resources to meet competing goals. By developing an understanding of who the players are (and who they aren’t) you can more easily make IT decisions that will positively impact your company. For example, if you know that the sales and marketing organizations are planning to increase e-commerce initiatives, you can invest early in the research, acquisition, and management of the hardware and software this channel requires.

All of your internal customers are looking to IT to help solve their business issues. But how do you decide whether to implement a new collection system or a new contact management system? You’ll likely start by evaluating the current solutions. Are they working as expected? Can small changes be made to improve performance? Are the systems effectively obsolete, making any further investment of questionable value? You also need to evaluate the business environment. Which system is more urgent? Which will have a greater impact on the company’s revenue and profitability? By understanding the business realities on the ground, you can position yourself to make smarter decisions. For instance, what if you knew that the business unit seeking a new contact management system was growing at 200 percent a year, and would be responsible for the company’s next product rollout?
This information makes your decision an easier one, but you can’t always count on the intelligence being readily available. Only by playing an active and visible role in your company can you develop the business savvy that will help you succeed.

2. SET EXPECTATIONS APPROPRIATELY

Everybody’s an expert, right? How many times have business managers come to you and told you exactly what technology solutions they need to solve their business issues? It’s a critical part of your job not only to select the most appropriate solutions, but also to set expectations properly so that your users understand how much the solution will cost, how long it will take to deploy, and exactly what it can and can’t do. Often, it’s Network Professionals who take the hit when a solution doesn’t meet expectations. It’s in your best interests to close the gap between the business side of the house and IT. When managers know up front what they can expect, your job is much easier. And when you deliver in line with expectations, you’ll be putting yourself in a better position to meet the expectations of your internal customers and fulfill the requirements of your service level agreement.

3. BE FINANCIALLY PRUDENT

In order to make effective decisions, Network Professionals must understand common financial terms like Return On Investment (ROI) and Total Cost of Ownership (TCO) and be ready to discuss them with business line managers. By understanding both the up-front and long-term costs of technology solutions, you’ll be better able to guide your organization in making technology choices that will positively impact the business. Managing your budget involves looking not only at expenditures, but also at expected returns. By working with business line managers to understand how they manage P&L, you become a partner who helps them achieve their business goals as you spend your budget wisely.

4. BE A TECHNOLOGY REALIST.

It’s probably not a stretch to say that you love technology. But as a Network Professional, you also need to be a technology realist. While you may admire the elegance of a new technology solution, you’re realistic enough to know that what matters for your company is how that technology can be applied to solve business problems, improve processes, and increase sales. You have to be prepared to say no to shiny new software if it can’t solve the pain points your company is experiencing. By staying up to date on the latest technologies as well as those coming down the road, you can separate the must-haves from the want-to-haves. And in doing this, you’ll be looked at as a credible source for technology advice and road-mapping, increasing your strategic value and enhancing your career.

5. BE CREDENTIAL-READY AND PRACTICE-PROVEN

You’re working in a global community, full of people with top-notch education and certifications. Employers are selecting candidates from the international talent pool, so you need to be able to compete. In this environment, certifications really do matter. Be sure to take advantage of employer reimbursement programs for training opportunities, but don’t be afraid to invest in getting yourself certified – you’ll quickly realize the return on this investment in your career. It’s also important to have practical experience and not be afraid to get your hands dirty. Stay on top of emerging technologies, and seize every opportunity to get involved with a new implementation to keep your skills sharp and up-to-date. Network Professionals who understand both the theory and practice of technology will see their achievements reflected in their salary and benefits.

6. BE DIPLOMATIC

In your role as a Network Professional, you’ll find yourself working with a diverse group of people in a wide variety of situations. From IT management to product managers, you’ll need to develop diplomatic skills that will allow you to navigate smoothly through your organization. Keep in mind that you’ll be called upon to explain technology to nontechnical employees, and you should learn how to explain pros and cons in language they can relate to.

7. CULTIVATE AN OPTIMISTIC OUTLOOK

The job of a Network Professional is a tough one. You’re forced to meet dozens of demands, expectations, and realities from internal customers throughout your organization. You’re the first person they’ll call when something goes wrong, but you may never hear about it when things go right. When you come to work in the morning in a positive frame of mind, your day will fly by, and you’re more likely to have a fulfilling career.

NESSUS

In computer security, Nessus is a proprietary, comprehensive vulnerability scanner. It is free of charge for personal use in a non-enterprise environment. Its goal is to detect potential vulnerabilities on the tested systems, for example:
Vulnerabilities that allow a remote cracker to control or access sensitive data on a system.
Misconfiguration (e.g. open mail relay, missing patches, etc.).
Default passwords, a few common passwords, and blank/absent passwords on some system accounts. Nessus can also call Hydra (an external tool) to launch a dictionary attack.
Denials of service against the TCP/IP stack by using mangled packets.
On UNIX (including Mac OS X), it consists of nessusd, the Nessus daemon, which does the scanning, and nessus, the client, which controls scans and presents the vulnerability results to the user. For Windows, Nessus 3 installs as an executable and has a self-contained scanning, reporting and management system.
Nessus is the world's most popular vulnerability scanner, estimated to be used by over 75,000 organizations worldwide. It took first place in the 2000, 2003, and 2006 security tools survey from SecTools.Org.

Operation
In typical operation, Nessus begins by doing a port scan with one of its four internal portscanners (or it can optionally use Amap or Nmap) to determine which ports are open on the target and then tries various exploits on the open ports. The vulnerability tests, available as subscriptions, are written in NASL (Nessus Attack Scripting Language), a scripting language optimized for custom network interaction.
Tenable Network Security produces several dozen new vulnerability checks (called plugins) each week, usually on a daily basis. These checks are available for free to the general public seven days after they are initially published. Nessus users who require support and the latest vulnerability checks should contact Tenable Network Security for a Direct Feed subscription, which is not free. Commercial customers are also allowed to access vulnerability checks without the seven-day delay.
Optionally, the results of the scan can be reported in various formats, such as plain text, XML, HTML and LaTeX. The results can also be saved in a knowledge base for reference against future vulnerability scans. On UNIX, scanning can be automated through the use of a command-line client. There exist many different commercial, free and open source tools for both UNIX and Windows to manage individual or distributed Nessus scanners.
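Because the scan results can be exported as XML, they are easy to post-process programmatically. The sketch below filters a report down to actual findings; note that the XML fragment is a simplified, illustrative structure loosely modeled on a Nessus report, not the exact schema, and the element and attribute names are assumptions.

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative fragment; real Nessus reports carry many
# more attributes and nested elements than shown here.
report = """
<Report>
  <ReportHost name="192.0.2.10">
    <ReportItem port="25" severity="2" pluginName="Open mail relay"/>
    <ReportItem port="80" severity="0" pluginName="HTTP server detection"/>
  </ReportHost>
</Report>
"""

root = ET.fromstring(report)
for host in root.iter("ReportHost"):
    for item in host.iter("ReportItem"):
        if int(item.get("severity")) > 0:  # keep only actual findings
            print(host.get("name"), item.get("port"), item.get("pluginName"))
```

Feeding successive reports through a script like this is one way to build the "knowledge base for reference against future vulnerability scans" mentioned above.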
If the user chooses to do so (by disabling the option 'safe checks'), some of Nessus's vulnerability tests may try to cause vulnerable services or operating systems to crash. This lets a user test the resistance of a device before putting it in production.
Nessus provides additional functionality beyond testing for known network vulnerabilities. For instance, it can use Windows credentials to examine patch levels on computers running the Windows operating system, and can perform password auditing using dictionary and brute force methods. Nessus 3 can also audit systems to make sure they have been configured per a specific policy, such as the NSA's guide for hardening Windows servers.

Limitations
While Nessus, through community participation, has a very extensive list of known security vulnerabilities, it is not a substitute for anti-virus software. It is only able to detect viruses that open ports and listen.

History
The "Nessus" Project was started by Renaud Deraison in 1998 to provide to the Internet community a free remote security scanner. Nessus is currently rated among the top products of its type throughout the security industry and is endorsed by professional information security organizations such as the SANS Institute.
On October 5, 2005, Tenable Network Security, the company Renaud Deraison co-founded, changed Nessus 3 to a proprietary (closed source) license. The Nessus 3 engine is still free of charge, though Tenable charges $100/month per scanner for the ability to perform configuration audits for PCI, CIS, FDCC and other configuration standards, technical support, SCADA vulnerability audits, the latest network checks and patch audits, the ability to audit anti-virus configurations and the ability for Nessus to perform sensitive data searches to look for credit card, social security number and many other types of corporate data.
As of July 31, 2008, Tenable sent out a revision of the feed license which allows home users full access to plugin feeds. A professional license is available for commercial use.
The Nessus 2 engine and a minority of the plugins are still GPL. Some developers have forked independent open source projects based on Nessus. Tenable Network Security has still maintained the Nessus 2 engine and has updated it several times since the release of Nessus 3.
Nessus 3 is available for many different UNIX and Windows systems, offers patch auditing of UNIX and Windows hosts without the need for an agent and is 2-5 times faster than Nessus 2.
There is a split-off project called OpenVAS that continues to develop a GPLed vulnerability scanner based on Nessus 2.
On April 9, 2009, Tenable released Nessus 4.0.0.

WIRESHARK

Wireshark is a free packet analyzer application. It is used for network troubleshooting, analysis, software and communications protocol development, and education. Originally named Ethereal, the project was renamed Wireshark in May 2006 due to trademark issues.

Functionality
Wireshark is very similar to tcpdump, but it has a graphical front-end, and many more information sorting and filtering options. It allows the user to see all traffic being passed over the network (usually an Ethernet network but support is being added for others) by putting the network interface into promiscuous mode.
Wireshark uses the cross-platform GTK+ widget toolkit, and is cross-platform, running on various computer operating systems including Linux, Mac OS X, and Microsoft Windows. Released under the terms of the GNU General Public License, Wireshark is free software.

History
Out of necessity, Gerald Combs (a computer science graduate of the University of Missouri-Kansas City) started writing a program called Ethereal so that he could have a tool to capture and analyze packets; he released the first version around 1998. There are now over 500 contributing authors, while Gerald continues to maintain the overall code and issue releases of new versions; the entire list of authors is available from Wireshark's website.
The name was changed to Wireshark in May, 2006, because creator and lead developer Gerald Combs could not keep using the Ethereal trademark (which was then owned by his old employer, Network Integration Services) when he changed jobs. He still held copyright on most of the source code (and the rest was redistributable under the GNU GPL), so he took the Subversion repository for Ethereal and used it as the basis for the Subversion repository of Wireshark.
Ethereal development has ceased, and an Ethereal security advisory recommended switching to Wireshark. eWEEK Labs named Wireshark one of "The Most Important Open-Source Apps of All Time" as of May 2, 2007.

Features
Wireshark is software that "understands" the structure of different networking protocols. Thus, it is able to display the encapsulation and the individual fields, along with their meanings, of packets specified by different networking protocols. Wireshark uses pcap to capture packets, so it can only capture packets on the networks supported by pcap.
Data can be captured "from the wire" from a live network connection or read from a file that records the already-captured packets.
Live data can be read from a number of types of network, including Ethernet, IEEE 802.11, PPP, and loopback.
Captured network data can be browsed via a GUI, or via the terminal (command line) version of the utility, tshark.
Captured files can be programmatically edited or converted via command-line switches to the "editcap" program.
Data display can be refined using a display filter.
Plugins can be created for dissecting new protocols.
Wireshark's native network trace file format is the libpcap format supported by libpcap and WinPcap, so it can read capture files from applications such as tcpdump and CA NetMaster that use that format. It can also read captures from other network analyzers, such as snoop, Network General's Sniffer, and Microsoft Network Monitor.
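The display filters mentioned above use Wireshark's own filter syntax. A few common examples follow; the annotations after each filter are explanatory only and are not part of the filter syntax itself.

```
ip.addr == 192.0.2.1                       traffic to or from a single host
tcp.port == 443                            TCP traffic on port 443
http.request.method == "GET"               HTTP GET requests only
tcp.flags.syn == 1 && tcp.flags.ack == 0   initial SYNs (connection attempts)
```

Display filters refine what is shown after capture; they are distinct from pcap capture filters, which limit what is recorded in the first place.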

Security
Capturing raw network traffic from an interface requires special privileges on some platforms. For this reason, older versions of Ethereal/Wireshark and tethereal/tshark often ran with superuser privileges. Given the huge number of protocol dissectors, which are called when traffic for their protocol is captured, a bug in a dissector can pose a serious security risk. Due to the rather large number of vulnerabilities in the past (many of which allowed remote code execution) and developers' doubts about future development, OpenBSD removed Ethereal from its ports tree prior to its 3.6 release.
One possible alternative is to run tcpdump, or the dumpcap utility that comes with Wireshark, with superuser privileges to capture packets into a file, and later analyze these packets by running Wireshark with restricted privileges on the packet capture dump file. On wireless networks, it is possible to use the Aircrack wireless security tools to capture IEEE 802.11 frames and read the resulting dump files with Wireshark.
As of Wireshark 0.99.7, Wireshark and tshark run dumpcap to perform traffic capture. On platforms where special privileges are needed to capture traffic, only dumpcap needs to be set up to run with those special privileges; neither Wireshark nor tshark needs to run with special privileges, and neither of them should be.

Ports
Wireshark runs on Unix and Unix-like systems, including Linux, Solaris, HP-UX, FreeBSD, NetBSD, OpenBSD and Mac OS X, and on Microsoft Windows.