Wednesday, July 30, 2025

Next-Generation Firewalls

Understanding the Evolution of Network Security

We begin with a look at the role that firewalls traditionally play in network security, as well as some of the challenges of network security today.

Defining the Application and Threat Landscape

Chapter 2 describes several trends affecting application development and the way applications are used in enterprises. You find out about the business benefits, as well as the security risks, associated with various applications, and how new threats are exploiting the “accessibility features” of Enterprise 2.0 applications.

Recognizing the Challenges of Legacy Security Infrastructures

Chapter 3 explains why traditional port-based firewalls and intrusion prevention systems are inadequate for protecting enterprises against new and emerging threats.

Solving the Problem with Next-Generation Firewalls

Chapter 4 takes a deep dive into the advanced features and capabilities of next-generation firewalls. You learn what a next-generation firewall is, what it isn’t, and how it can benefit your organization.

Deploying Next-Generation Firewalls

Chapter 5 explains the importance of security policies and controls, and the role of next-generation firewalls in implementing those policies and controls. You also get some help defining specific technical requirements for your organization and planning the deployment of a next-generation firewall on your network.

Ten Evaluation Criteria for Next-Generation Firewalls

Here, in that familiar For Dummies Part of Tens format, we present ten features to look for and criteria to consider when choosing a next-generation firewall.


Just as antivirus software has been a cornerstone of PC security since the early days of the Internet, firewalls have been the cornerstone of network security.

Today’s application and threat landscape renders traditional port-based firewalls largely ineffective at protecting corporate networks and sensitive data. Applications are the conduit through which everything flows — a vector for our business and personal lives — along with their associated benefits and risks. Such risks include new and emerging threats, data leakage, and noncompliance.

This chapter explains how traditional firewalls operate, why they cannot meet today’s application and threat challenges, and how data leakage and compliance issues are defining network security and the need for a better firewall.

Why Legacy Firewalls Are No Longer Effective

A firewall, at its most basic level, controls traffic flow between a trusted network (such as a corporate LAN) and an untrusted or public network (such as the Internet). The most commonly deployed firewalls today are port-based (or packet filtering) firewalls, or some variation (such as stateful inspection) of this basic type of firewall. These firewalls are popular because they are relatively simple to operate and maintain, generally inexpensive, have good throughput, and have been the prevalent design for more than two decades.

At the rapid pace of the Internet Age, a two-decade-old design is practically medieval. In fact, network security is often likened to the Dark Ages: the network perimeter is analogous to the walls of a castle, with a firewall controlling access like a drawbridge. And like a drawbridge that is either up or down, a port-based firewall is limited to just two options for controlling network traffic: allow or block.

Port-based firewalls (and their variants) use source and destination IP addresses and TCP/UDP port information to determine whether a packet should be allowed to pass between networks or network segments. The firewall inspects the port numbers in the first few bytes of the TCP header of an IP packet to infer the application protocol — for example, SMTP (port 25) or HTTP (port 80).

Most firewalls are configured to allow all traffic originating from the trusted network to pass through to the untrusted network, unless it is explicitly blocked by a rule. For example, the Simple Network Management Protocol (SNMP) might be explicitly blocked to prevent certain network information from being inadvertently transmitted to the Internet. This would be accomplished by blocking UDP ports 161 and 162, regardless of the source or destination IP address.
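To make those mechanics concrete, here is a minimal sketch in Python of how a port-based firewall might evaluate such a policy. The rule format, field names, and evaluate function are illustrative assumptions for this example, not any vendor's implementation:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    protocol: str  # "tcp" or "udp"
    dst_port: int

# Hypothetical rule table, evaluated top-down; first match wins.
# Models a default-allow outbound policy with SNMP (UDP 161/162)
# explicitly blocked regardless of source or destination address.
RULES = [
    {"protocol": "udp", "dst_ports": {161, 162}, "action": "block"},  # SNMP
    {"protocol": None,  "dst_ports": None,       "action": "allow"},  # default
]

def evaluate(packet: Packet) -> str:
    for rule in RULES:
        if rule["protocol"] and rule["protocol"] != packet.protocol:
            continue
        if rule["dst_ports"] and packet.dst_port not in rule["dst_ports"]:
            continue
        return rule["action"]
    return "block"  # implicit deny if nothing matches

print(evaluate(Packet("10.0.0.5", "203.0.113.9", "udp", 161)))  # block (SNMP)
print(evaluate(Packet("10.0.0.5", "203.0.113.9", "tcp", 80)))   # allow
```

Notice that the decision uses only addresses, protocol, and port numbers; nothing in the rule table knows anything about the actual application generating the traffic.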

Static port control is relatively easy. Stateful inspection firewalls address dynamic applications that use more than one well-defined port (such as FTP, which uses ports 20 and 21). When a computer or server on the trusted network originates a session with a computer or server on the untrusted network, a connection is established. On stateful packet inspection firewalls, a dynamic rule is temporarily created to allow responses or replies from the computer or server on the untrusted network. Otherwise, return traffic would have to be explicitly permitted by access rules manually created on the firewall (which usually isn’t practical).
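The core idea of stateful inspection can be sketched as a simple connection-tracking table. This is a heavy simplification (a real stateful firewall also tracks TCP flags and state, sequence numbers, and idle timeouts), and the function names and tuple layout are assumptions for the example:

```python
# Minimal connection-tracking sketch. Outbound sessions from the
# trusted network are recorded; inbound packets are allowed only
# if they match a tracked session (i.e., they are return traffic).
sessions = set()  # (src_ip, src_port, dst_ip, dst_port, protocol)

def outbound(src_ip, src_port, dst_ip, dst_port, proto):
    sessions.add((src_ip, src_port, dst_ip, dst_port, proto))
    return "allow"

def inbound(src_ip, src_port, dst_ip, dst_port, proto):
    # A reply reverses the source/destination of the original session.
    if (dst_ip, dst_port, src_ip, src_port, proto) in sessions:
        return "allow"
    return "block"  # unsolicited inbound traffic is dropped

outbound("10.0.0.5", 51200, "203.0.113.9", 80, "tcp")
print(inbound("203.0.113.9", 80, "10.0.0.5", 51200, "tcp"))   # allow (reply)
print(inbound("198.51.100.7", 80, "10.0.0.5", 51200, "tcp"))  # block
```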

All of this works well as long as everyone plays by the rules. Unfortunately, the rules are more like guidelines, and not everyone using the Internet is nice!

The Internet now accounts for the majority of traffic traversing enterprise networks. And it’s not just Web surfing. The Internet has spawned a new generation of applications being accessed by network users for both personal and business use. Many of these applications help improve user and business productivity, while other applications consume large amounts of bandwidth, pose needless security risks, and increase business liabilities — for example, data leaks and compliance — both of which are addressed in the following sections. And many of these applications incorporate “accessibility” techniques, such as using nonstandard ports, port-hopping, and tunneling, to evade traditional port-based firewalls.
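The weakness is easy to demonstrate: port-based classification assumes that the port reliably identifies the application. A hypothetical sketch (the lookup table and function are illustrative only):

```python
# Port-based classification: the port is assumed to imply the application.
WELL_KNOWN = {25: "SMTP", 80: "HTTP", 443: "HTTPS"}

def classify_by_port(dst_port):
    return WELL_KNOWN.get(dst_port, "unknown")

# A P2P or tunneling application that hops to port 80 is simply
# labeled "HTTP" and waved through by a port-based firewall.
print(classify_by_port(80))  # "HTTP" -- even if the payload is P2P traffic
```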

IT organizations have tried to compensate for deficiencies in traditional port-based firewalls by surrounding them with proxies, intrusion prevention systems, URL filtering, and other costly and complex devices, all of which are equally ineffective in today’s application and threat landscape.

Data Leakage Is a Problem

Large-scale, public exposures of sensitive or private data are far too common. Numerous examples of accidental and deliberate data leakage continue to regularly make nightmare headlines: tens of thousands of credit card numbers lost by a major retailer, or social security numbers leaked by a government agency, health care organization, or employer. For example, in December 2008, an improperly configured and prohibited peer-to-peer (P2P) file sharing application exposed a database of 24,000 U.S. Army soldiers’ personal information to the public domain. Unfortunately, such incidents are not isolated: the U.S. Army’s Walter Reed Medical Center, a U.S. Government contractor working on Marine One, and Pfizer Corporation all had earlier high-profile breaches of a similar nature. In all of these cases, sensitive data was leaked via an application that was expressly prohibited by policy but not adequately enforced with technology.

Data leakage prevention (DLP) technologies are being touted as a panacea and have captured the attention of many IT organizations. Unfortunately, given the scope, size, and distributed nature of most enterprise datasets, just discovering where the data is and who owns it is an insurmountable challenge. Adding to this challenge, questions regarding access control, reporting, data classification, data at rest versus data in transit, desktop and server agents, and encryption abound. As a result, many DLP initiatives within organizations progress slowly and eventually falter.

Many data loss prevention solutions attempt to incorporate too much of the information security function (and even include elements of storage management!) into an already unwieldy offering. Needless to say, this broadened scope adds complexity, time, and expense — both in hard costs and in staff time. Thus, DLP technologies are often cumbersome, ironically incomplete (focusing mostly on the Web and e-mail), and for many organizations — overkill . . . not to mention expensive!

Furthermore, many of the recent breaches caused by unauthorized and improperly configured P2P file sharing applications wouldn’t have been prevented by the typical implementation of DLP technologies on the market today — because control of applications isn’t addressed.

Some organizations will have to go through the effort of a large-scale DLP implementation, which should include data discovery, classification, and cataloging. But for most organizations, controlling the applications most often used to leak sensitive data, and stopping unauthorized transmission of private or sensitive data such as credit card and social security numbers, is all that is needed. Exerting that control at trust boundaries (the network perimeter) is ideal, whether the demarcation point is between inside and outside or between internal users and internal resources in the datacenter. The firewall sits in the perfect location, seeing all traffic traversing different networks and network segments. Unfortunately, legacy port- and protocol-based firewalls, being ignorant of applications, users, and content, can’t do anything about any of this. To effectively address data leakage with a firewall solution, organizations should:

- Gain control over the applications on their network, thus limiting the avenues of data leakage
- Scan the applications they do want on their networks for sensitive or private data (a minimal sketch of such scanning follows this list)
- Understand which users are initiating these application transactions and why
- Implement appropriate control policies and technology to prevent accidental or intentional data leakage
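As a rough illustration of the scanning point above, content inspection for sensitive data amounts, at its simplest, to pattern matching over decoded application payloads. The patterns and function below are hypothetical; production data-filtering features add validation (such as Luhn checksums for card numbers) and contextual rules to reduce false positives:

```python
import re

# Hypothetical patterns for two common kinds of sensitive data.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_payload(payload):
    """Return the names of sensitive-data patterns found in a payload."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(payload)]

print(scan_payload("order ref 4111 1111 1111 1111"))  # ['credit_card']
print(scan_payload("applicant SSN 123-45-6789"))      # ['ssn']
```

Note that this kind of scanning is only possible after the firewall has identified and decoded the application carrying the data, which is exactly what port-based devices cannot do.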

If enterprises could control the flow of sensitive or private data at the perimeter, many of the data loss incidents that regularly make the news could be avoided. Unfortunately, legacy security infrastructures, with traditional firewalls as the cornerstone, are ill-equipped to provide this functionality.

Compliance Is Not Optional

With more than 400 regulations worldwide mandating information security and data protection requirements, organizations everywhere are struggling to attain and maintain compliance. Examples of these regulations include HIPAA, FISMA, FINRA, and GLBA in the U.S., and the EU Data Protection Act (DPA) in Europe.

Ironically, perhaps the most far-reaching, most effective, and best-known compliance requirement today isn’t even a government regulation. The Payment Card Industry Data Security Standard (PCI DSS) was created by the major payment card brands (American Express, MasterCard, Visa, and others) to protect companies, banks, and consumers from identity theft and fraudulent card use. And as economies rely more and more on payment card transactions, the risks of lost cardholder data will only increase, making any effort to protect the data critical — whether compliance-driven or otherwise.

PCI DSS is applicable to any business that transmits, processes, or stores payment card data (such as credit card or debit card numbers), regardless of the number or amount of transactions processed.

Companies that do not comply can be subject to stiff penalties, including fines of up to $25,000 per month for minor violations, fines of up to $500,000 for violations that result in actual lost or stolen financial data, and loss of card-processing authorization (which makes it almost impossible for a business to operate).

While compliance requirements are almost entirely based on information-security best practices, it is important to remember that security and compliance aren’t the same thing. Regardless of whether or not a business is PCI compliant, a data breach can be very costly. According to research conducted by Forrester, the estimated per-record cost of a breach (including fines, cleanup, lost opportunities, and other costs) ranges from $90 (for a low-profile, nonregulated company) to $305 (for a high-profile, highly regulated company).

Security and compliance are related, but they are not the same thing!

PCI DSS version 1.2 consists of 12 general requirements and more than 200 specific requirements. Of the 12 general requirements, the following specifically address firewall and firewall-related requirements:

- Requirement 1: Install and maintain a firewall configuration to protect cardholder data.
- Requirement 5: Use and regularly update anti-virus software or programs.
- Requirement 6: Develop and maintain secure systems and applications.
- Requirement 7: Restrict access to cardholder data by business need-to-know.
- Requirement 10: Track and monitor all access to network resources and cardholder data.

To use network segmentation to reduce PCI DSS scope, an entity must isolate systems that store, process, or transmit cardholder data from the rest of the network.
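Conceptually, that kind of segmentation is just a strict policy enforced at an internal trust boundary. The sketch below shows a hypothetical zone-based rule check; the zone names, rule format, and check function are assumptions for illustration, not any product's syntax:

```python
# Hypothetical zone rules isolating a cardholder-data segment:
# only the payment application servers, on one database port, may
# cross the boundary; all other traffic into the segment is denied.
SEGMENT_RULES = [
    {"from": "app-servers", "to": "cardholder-data", "port": 5432, "action": "allow"},
    {"from": "any",         "to": "cardholder-data", "port": None, "action": "deny"},
]

def check(src_zone, dst_zone, port):
    for rule in SEGMENT_RULES:
        if rule["from"] not in (src_zone, "any"):
            continue
        if rule["to"] != dst_zone:
            continue
        if rule["port"] is not None and rule["port"] != port:
            continue
        return rule["action"]
    return "allow"  # traffic not bound for the isolated segment

print(check("app-servers", "cardholder-data", 5432))  # allow
print(check("workstations", "cardholder-data", 445))  # deny
```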

Network security used to be relatively simple — everything was more or less black and white, either clearly bad or clearly good. Business applications constituted good traffic that should be allowed, while pretty much everything else constituted bad traffic that should be blocked.

Problems with this approach today include the fact that applications have become:

- Increasingly “gray” — classifying types of applications as good or bad is not a straightforward exercise.
- Increasingly evasive.
- The predominant vector of today’s cybercriminals and threat developers.

This chapter explores the evolving application and threat landscape, the blurring distinction between user applications and business applications, and the strategic nature of many of these applications (and their associated risks) for businesses today.

