Network security is a multi-faceted business segment that has mushroomed in size and scope over the last decade. The potential threats to a network and its content can come from many different areas, adding levels of complexity to an already daunting problem.

Prior to the Internet, securing a satellite network and its content from real-time intrusion was relatively easy. The interconnection of satellite-based networks with the World Wide Web, however, has opened a Pandora’s Box of problems.
 

Security Policy

For many years satellite networks were standalone and not generally interconnected with the outside world. They benefitted from security by obscurity. That is not the case anymore.

There are a handful of different tools network engineers can use to enhance the security of their networks, but regardless of the tools chosen, an effective security strategy begins with an honest assessment of a network and the development of a security policy. Security policies are useless unless they are rigorously adhered to.

Every piece of equipment in a network should, at minimum, have a log-in screen which requires a user name and password to access the device. Consider the risk posed by equipment which can be manipulated simply by plugging some sort of terminal into it. User name and password protection is of little value if you never change them from their factory defaults. Once the defaults are discovered for a piece of hardware, they are posted on hacker bulletin boards.
 

Firewalls

The following network security tools can be used individually, but they are most effective when used together, each leveraging the benefits of the others.

Firewalls are still the first line of defense against network attacks, and the technology has evolved over the last 15 years through developments by vendors such as Cisco, Juniper, and Check Point. Traditionally, firewalls have been deployed at trust boundaries, or those points where private and public networks meet; however, with the widespread use of wireless 802.11 networks, the number of trust boundaries has proliferated and so have the number of places networks can be attacked.

Originally, firewalls controlled traffic by opening and closing ports designated by the Internet Engineering Task Force (IETF) to correspond to specific applications. Port 80 would control Web browsing, Port 25 would control SMTP traffic, Port 23 would control telnet traffic, and so on. For a while, everyone abided by these rules, but application developers wanted their software to reach broader classes of users. Some ports, such as 80 and 443, are almost always open, and application developers took advantage of this way in and out of the firewall. Applications evolved to the point where most can now find a way through port-based firewalls, and the value of port blocking has declined with time.

Static applications used a single port (80, 25, 23, etc.) and could be controlled by simple packet filters. Dynamic applications (e.g., FTP) would go out one port and in another, requiring more sophisticated control; hence, stateful inspection was born. Modern applications ignore ports entirely, so some vendors tack on more types of filters, adding processing delay. (See “Minimizing Latency in Satellite Networks” in the September issue of Via Satellite.) The reality is that enterprise control of modern applications requires something even more sophisticated.
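
To make the distinction concrete, the sketch below contrasts a simple port-based filter with a stateful session table. It is an illustrative Python sketch only; the packet fields and rule table are hypothetical and do not represent any vendor's implementation.

```python
# Illustrative contrast between a static port filter and stateful inspection.
# Packet fields and allowed ports are made-up examples.

# Static packet filter: allow or deny based only on the destination port.
ALLOWED_PORTS = {80, 25, 23}          # web, SMTP, telnet

def static_filter(packet):
    return packet["dst_port"] in ALLOWED_PORTS

# Stateful inspection: remember sessions the inside host initiated and
# admit the matching return traffic, whatever port it arrives on.
active_sessions = set()

def stateful_filter(packet):
    flow = (packet["src_ip"], packet["src_port"],
            packet["dst_ip"], packet["dst_port"])
    reverse = (packet["dst_ip"], packet["dst_port"],
               packet["src_ip"], packet["src_port"])
    if packet["direction"] == "outbound":
        active_sessions.add(flow)      # record the outbound session
        return True
    return reverse in active_sessions  # inbound must match a known session

# A fully dynamic application such as FTP negotiates a separate data port on
# the control channel, so a real stateful engine also inspects that channel
# to learn which new connection to expect; that is the extra sophistication
# the paragraph above refers to.
```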

To meet this need, Palo Alto Networks has integrated application ID (App-ID) into its firewall. The company can classify the characteristics of 870 different applications. App-ID allows further granularity in the decisions that can be made across a network. For instance, the marketing department in an organization might be allowed to run WebEx while other departments cannot. Application identification is a powerful tool, but some applications, such as Skype, use a proprietary form of encryption and hop from port to port, making them resistant to signature-based identification. Skype’s mannerisms, however, are identifiable through heuristic analysis, allowing Palo Alto Networks’ firewall to block Skype traffic.
 

AAA

As networks grow in size, their care and feeding becomes increasingly complex. To help manage this task, AAA systems (referred to as Triple A) encompass three different security functions: authentication, authorization and accounting. Authentication validates who you are; authorization validates what you are allowed to do on a network; and accounting records what you have done. Terminal Access Controller Access-Control System (TACACS) and Remote Authentication Dial-In User Service (RADIUS) are both security protocols which enjoy significant market share. The Lightweight Directory Access Protocol (LDAP) is an application protocol, specified by the IETF, which also provides hierarchical control of the information technology infrastructure.
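
As a rough illustration of how the three functions divide the work, the following Python sketch models authentication, authorization and accounting as separate checks. The user database, permission table and log format are hypothetical; in practice these checks are delegated to a TACACS, RADIUS or LDAP server.

```python
# Hypothetical AAA sketch: who you are, what you may do, what you did.
import hashlib
import time

USERS = {"jsmith": hashlib.sha256(b"not-a-default-password").hexdigest()}
PERMISSIONS = {"jsmith": {"show_config", "reboot_modem"}}
ACCOUNTING_LOG = []

def authenticate(username, password):
    """Authentication: validate who you are."""
    digest = hashlib.sha256(password.encode()).hexdigest()
    return USERS.get(username) == digest

def authorize(username, action):
    """Authorization: validate what you are allowed to do."""
    return action in PERMISSIONS.get(username, set())

def account(username, action, allowed):
    """Accounting: record what was done (or attempted)."""
    ACCOUNTING_LOG.append((time.time(), username, action, allowed))

if authenticate("jsmith", "not-a-default-password"):
    permitted = authorize("jsmith", "reboot_modem")
    account("jsmith", "reboot_modem", permitted)
```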

As a network grows in size and complexity, an AAA system becomes indispensable to the security of the network.
 

Intrusion Prevention Systems

A denial of service attack is an attempt to overwhelm a computer system, or network, and make it unavailable for use by others. One of the basic plays of such an attack is for the attacker to send a request to open a connection and then wait for a period of time, say 10 seconds, before responding. The attacker floods the system under attack with these requests in an attempt to keep it from responding to normal traffic. The targeted system assumes that the computer making the requests is simply slow to respond and reserves resources for all of the outstanding requests; it cannot distinguish between legitimate requests and malicious ones. Denial of service attacks consume both bandwidth and computing power. Since the bandwidth in most satellite networks is fine-tuned, any upset will have a serious impact on network performance.
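
The resource-exhaustion dynamic described above can be modeled in a few lines. The Python sketch below is a toy model, with made-up table sizes and timeouts, showing how a flood of half-open requests crowds out legitimate users.

```python
# Toy model of half-open connection exhaustion. The table size and timeout
# are invented numbers used only to illustrate the mechanism.
import time

MAX_PENDING = 1024          # connection slots the server can reserve
HANDSHAKE_TIMEOUT = 10.0    # seconds the server waits for a reply
pending = {}                # half-open requests: client -> time received

def receive_open_request(client_id, now):
    # Expire half-open entries whose timeout has passed.
    for cid, started in list(pending.items()):
        if now - started > HANDSHAKE_TIMEOUT:
            del pending[cid]
    if len(pending) >= MAX_PENDING:
        return "rejected"    # table full: legitimate users are turned away
    pending[client_id] = now # reserve resources and wait for the reply
    return "pending"

# An attacker who sends requests faster than they expire keeps the table
# full, so a normal client is rejected even though bandwidth and CPU may
# still be available for other work.
start = time.time()
for i in range(2000):
    receive_open_request(f"attacker-{i}", start)
print(receive_open_request("legitimate-user", start))  # -> "rejected"
```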

Network Intrusion Detection Systems (NIDS) look at incoming packets and sift out suspicious-looking behavior for inspection. Intrusion detection systems rely on humans to evaluate data, which takes time, a precious commodity when your network is under attack. Intrusion Prevention Systems (IPS) leapfrogged the NIDS approach by automating the decision-making process, thereby improving the responsiveness to attacks. TippingPoint pioneered this field five years ago and continues to be the market leader. With TippingPoint’s IPS, all incoming packets are filtered and reviewed for threats. The system can make real-time protocol decisions, evaluate sequences of events, and analyze patterns of data for suspicious behavior. Once suspicious activity is recognized, the system can notify a human or take direct action.
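
The difference between detection and prevention comes down to who acts on the verdict. The following Python sketch, using made-up signatures rather than any vendor's filters, illustrates the split.

```python
# NIDS vs. IPS in miniature: the same suspicious-pattern check either raises
# an alert for a human (detection) or drops the traffic automatically
# (prevention). Signatures below are invented examples.
SIGNATURES = [b"../../etc/passwd", b"' OR 1=1 --"]

def looks_suspicious(payload: bytes) -> bool:
    return any(sig in payload for sig in SIGNATURES)

def nids_inspect(payload: bytes, alert_queue: list) -> bytes:
    # Detection: flag the traffic and wait for a person to evaluate it.
    if looks_suspicious(payload):
        alert_queue.append(payload)
    return payload                     # traffic is forwarded regardless

def ips_inspect(payload: bytes):
    # Prevention: the decision is automated, so the response is immediate.
    if looks_suspicious(payload):
        return None                    # drop in real time
    return payload

alerts = []
nids_inspect(b"GET /../../etc/passwd HTTP/1.1", alerts)   # forwarded, alert queued
print(ips_inspect(b"GET /../../etc/passwd HTTP/1.1"))     # -> None (dropped)
```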
 

Uplogix’ management appliance connects to and protects the console port of remote devices in your IT Infrastructure. Through integration with the AAA system, appliances diagnose and fix network problems securely and automatically, while falling back on encrypted and cached authentication data to maintain security policies during periods of network downtime.

The challenge for many network administrators is ensuring accurate scanning, analysis and routing of traffic without impacting network performance. Most vendors can only turn on a small number of filters because doing so introduces too much performance degradation (processing delay). TippingPoint places a high degree of trust in the accuracy of its filters, ensuring that traffic is identified and routed appropriately. To keep the network running smoothly, TippingPoint has optimized its inspection engine so that it can turn on more of these filters without degrading performance. In addition, the company offers recommended filter settings from its DV Labs team that suggest which filters should be turned on, giving its customers maximum protection with no impact on network performance.

Keep in mind that the more filters you run, the slower the overall process. TippingPoint relies on the accuracy of its filters rather than their sheer number; combining a smaller set of filters with the company’s proprietary high-speed data engine minimizes processing delay.

 

Secure Remote Management

IP-based computing and communications equipment has integrated management interfaces so it can be accessed and controlled over a LAN or WAN. One of the challenges of using SNMP (Simple Network Management Protocol) to manage an IP-based network is that you must rely on the network to manage the network. If the network is down, you have, in essence, lost your eyes and ears and have no way of communicating with the information technology infrastructure at the remote location.

The biggest potential security threat to an IP-based network comes during a period of network downtime, whether part of the network or all of it. The reason: when remote devices lose their connection with the AAA system, they fall back to the default user name and, sometimes, the default password. Therefore, when the network goes down, network devices are much more vulnerable to attack. The addresses of the management interfaces mentioned above can easily be discovered with a traceroute, creating unnecessary risk. Automated remote management systems, such as the one made by Uplogix, use a secure, secondary channel to manage network devices in lieu of the management interfaces. Uplogix’s system uses a premises-based approach, with a smart appliance that connects directly to the console ports of the different pieces of gear, such as satellite modems, routers, switches, firewalls, access points, etc. The devices are then managed via the appliance’s automated responses to managed device states or SSH connections from a central server, which can interface with the AAA system. In the event of a network outage, the remote appliance can connect back to the central site via an out-of-band connection.

Since the appliance connects to the console ports of the gear at the remote location via a cable, it never loses its connection with the devices, even during network downtime. The Uplogix appliance caches the last known AAA configuration and enforces an organization’s security policy even when the network is down. Automated remote management systems also allow for a high level of granularity when it comes to authorization. For example, a group which maintains PBXs is always at odds with the group which maintains the network interfaces. Granular authorization allows individuals or groups read-only privileges into one layer of the other group’s domain. Therefore, the PBX group can check the status of the PRIs to confirm everything is in order with the network before a trouble ticket is generated.
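
A rough sketch of that cached-fallback idea appears below. The function names and cache structure are hypothetical and are not Uplogix’s implementation; the point is simply that the last known AAA policy continues to be enforced when the central server is unreachable, rather than reverting to default credentials.

```python
# Hypothetical sketch of cached AAA policy enforcement during an outage.
cached_policy = {}   # username -> permitted actions, refreshed while the network is up

def refresh_cache_from_aaa(aaa_response):
    """Store the most recent authorization data received from the AAA server."""
    cached_policy.clear()
    cached_policy.update(aaa_response)

def is_authorized(username, action, aaa_server_reachable, query_aaa=None):
    if aaa_server_reachable and query_aaa is not None:
        allowed = query_aaa(username, action)          # normal, online path
        if allowed:
            cached_policy.setdefault(username, set()).add(action)
        return allowed
    # Network down: fall back to the cached policy instead of a default login.
    return action in cached_policy.get(username, set())

# Example: while the WAN is up the cache is refreshed; during an outage the
# same checks are answered locally from the cache.
refresh_cache_from_aaa({"pbx-team": {"show_pri_status"}})
print(is_authorized("pbx-team", "show_pri_status", aaa_server_reachable=False))  # True
print(is_authorized("pbx-team", "reboot_router", aaa_server_reachable=False))    # False
```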
 

Encryption

Encryption safeguards the content on a network by enciphering it by means of an algorithm. This process scrambles the bits so the content is no longer readable. The encrypted content is then decrypted using the same algorithm. The key to both operations is a secret encryption key which both the sender and receiver use. There are two broad encryption strategies: encrypting the channel the content is travelling across and encrypting the data itself (e.g., individual files). Both have advantages and disadvantages.

When an entire link is encrypted, regular data goes into and out of the encrypted pipe. It is a common technique for protecting government and military networks and provides a high level of security, but there are several potential negatives. The entire circuit, from end to end, must be encrypted; should a portion of the circuit not be encrypted, the unprotected data is vulnerable to being copied at that point. Network engineers must therefore understand the exact data path of every encrypted circuit. Once an encrypted channel is set up, it is simple to operate and maintain, but be aware that encryption adds latency and overhead. The military routinely encrypts satellite channels, for obvious reasons, while commercial entities tend to shy away from this approach due to the added cost and complexity.

Another option is to encrypt individual files, transport them to their destination, and then decrypt them. AES (Advanced Encryption Standard) with a 256-bit key is now the gold standard for encryption. The software key which encrypts and decrypts the data files is unique and must be safeguarded. Encrypting individual files has many advantages. Encrypted files can be safely sent over any network with the assurance they will not be viewed by unauthorized personnel. It is also an effective measure against what is known as a “man in the middle” attack: even if there were an undiscovered breach of an encrypted channel, the encrypted file would still be safeguarded. State secrets are encrypted and then sent over an encrypted channel, resulting in double encryption. Since this type of encryption simply rearranges the bit pattern, it does not increase file size or add any overhead.
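
For readers who want to see what file-level AES-256 encryption looks like in practice, here is a brief sketch using the open-source Python “cryptography” package (one of several libraries that implement AES). The file names are placeholders, and key management, the hard part, is not shown.

```python
# Sketch of AES-256 file encryption/decryption with the third-party
# "cryptography" package (pip install cryptography). File names are
# placeholders; safeguarding the key is assumed and not shown.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key; must be kept secret
aesgcm = AESGCM(key)

def encrypt_file(path_in, path_out):
    nonce = os.urandom(12)                  # unique value per encryption
    with open(path_in, "rb") as f:
        plaintext = f.read()
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)
    with open(path_out, "wb") as f:
        f.write(nonce + ciphertext)         # GCM mode appends a small auth tag

def decrypt_file(path_in, path_out):
    with open(path_in, "rb") as f:
        blob = f.read()
    nonce, ciphertext = blob[:12], blob[12:]
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)
    with open(path_out, "wb") as f:
        f.write(plaintext)

encrypt_file("report.pdf", "report.pdf.enc")
decrypt_file("report.pdf.enc", "report_decrypted.pdf")
```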

Mathematical formulas are at the heart of encryption, and whatever can be done mathematically can ultimately be undone; hence, new algorithms using larger keys are developed as the computing power promised by Moore’s Law makes it easier to break older, smaller keys. So although encryption is the most secure way of protecting content, time does not stand still, and the form of encryption you use today will likely become outdated as newer and stronger methods become available.
 

Hardened Semiconductors

DTH providers rely on conditional access systems to prevent unauthorized viewing, thereby protecting the content on the network. Cryptographic keys and algorithms are embedded into security modules and set-top boxes, which decrypt the protected content broadcast over satellite. As mentioned previously, it is imperative that the subscriber keys be kept secret, or the system can be compromised, and these embedded algorithms and keys are the target of reverse engineering. A cottage industry has developed in Canada which helps fuel the piracy of DTH systems around the globe. Roughly 2 million Canadians watch American television, even though they cannot legally subscribe to American DTH services because of language requirements and other rules. The solution: buy an illegal receiver to defeat the DTH provider’s conditional access system. Since their numbers are in the millions, there is a lot of money at stake; enough to fund covert labs which dissect the inner workings of microprocessor chips to learn how they work, ultimately unraveling the algorithms and cryptographic keys hidden inside. DTH receivers with the ability to load illegal codes that get around the conditional access system are then mass produced and sold around the world. Unnamed industry sources confided that theft of DTH services in the United States ranges between 10 percent and 20 percent but can be as high as 90 percent in developing countries.

Pirates use a wide range of strategies to try to extract the innermost secrets from chips. One technique, called differential power analysis, involves measuring a chip’s electrical consumption and then using statistical techniques to solve for secret keys. Another attack, called fault induction or glitching, involves analyzing the mistakes a chip makes when it is pushed beyond its normal operating bounds. Chips can also be ground down layer by layer and imaged under a microscope to identify and analyze logic structures. Cryptography Research helps companies safeguard their DTH services by adding dedicated tamper-resistant circuitry to the chips used in set-top boxes and conditional access modules. The company combines proprietary mathematical techniques with advanced semiconductor engineering to make it harder to pull information out of a chip, and thereby harder to copy.
