Cyber War: Are Your Defenses Sufficient?

The precipitating event in the 1986 Tom Clancy novel Red Storm Rising is a massive fire in a Soviet petroleum installation caused by an infiltrator in the control room.

The biggest malware news in recent months was the Stuxnet worm. There is good reason to believe it was designed to attack very specific pieces of industrial equipment: the uranium enrichment centrifuges used in Iran’s nuclear program.

Could such an attack happen here? Increased security since 9/11 has reduced, but not eliminated, that likelihood, and the danger of a cyber attack seems to be increasing with time. In the industrial world the main targets would be industrial control systems (ICSs): SCADA systems, distributed control systems (DCSs) and programmable logic controllers (PLCs). This article takes a look at the present state of affairs, presents the opinions of several experts and offers some advice on how a company can protect itself.


What is malware?

Malware can be defined as software designed to cause harm to a computer system or to cause it to perform undesirable activities.

There are several recognized types of malware. Probably the oldest is the virus, usually defined as a program capable of reproducing itself and usually capable of causing harm to files or other programs on the same computer. Viruses are often attached to other, innocuous programs.

A Trojan (short for Trojan horse) is a piece of software that appears to be safe but contains harmful software, like a virus or a worm. A worm is a piece of malware that self-replicates and sends copies of itself to other computers via a network. Unlike a virus, it does not need to attach itself to another program.

A rootkit is “software that enables continued privileged access to a computer while actively hiding its presence from administrators by subverting standard operating system functionality or other applications,” according to Wikipedia. Industrial Defender[1] defines a control system rootkit as “a piece of software that modifies the behavior of a control system and disguises itself from detection.”

Spyware is software that captures information from an infected computer; it may also take control of the computer to install other software, change default settings, or redirect a Web browser. A keylogger is one type of spyware. Adware is software that causes a computer to display advertisements; some adware contains other malware such as spyware.

One popular form of attack, the denial of service (DoS) or distributed denial of service (DDoS) attack, is of less concern to industrial users because it typically targets websites rather than control systems.


Who are the attackers?

Many people seem to have the impression that most malware attacks are launched either by hackers out for bragging rights or by criminals looking to steal identities, extort money or build networks of zombie computers. But they are also launched by foreign governments, terrorists and independent political actors, e.g., WikiLeaks and its defenders. The purposes include espionage, either military or industrial; the pursuit of directly salable information for fraud or identity theft; embarrassment of the target; and preparation for other attacks.

Malware attacks can also be used to sabotage military installations or civilian infrastructure, either for political reasons or for extortion.

Testifying in April 2007 before the House Committee on Homeland Security Subcommittee on Emerging Threats, Cyber Security, and Science and Technology, O. Sami Saydjari, president of the nonprofit Professionals for Cyber Defense, pointed out that “cyber warfare will be economic and social warfare.” Saydjari was later quoted in a Sept. 26, 2007 article by Jeanne Meserve on CNN: “For about $5 million and between three to five years of preparation, an organization, whether it be transnational terrorist groups or nation states, could mount a strategic attack against the United States.”


Consequences of a successful attack

Espionage can lead to loss of trade secrets — business plans, customer lists, financial information or intellectual property such as product design information or proprietary technology. A good example was the 2009 Operation Aurora incident, in which an attack traced back to the Chinese government stole intellectual property from Google and managed to gain information on Chinese dissidents.

Sabotage can cause disruption of operations, loss of production, destruction of capital equipment or damage to the community through power outages, failure of water systems, fire, explosion, flood or the release of toxic or radioactive materials. At a minimum such events will cause economic harm; at a maximum they can cause mass casualties. By all reports, one of the Stuxnet worm’s main functions was to sabotage the centrifuges used for uranium enrichment in Iran’s nuclear program. The CNN article by Jeanne Meserve mentioned above discussed the results of a Department of Homeland Security/Idaho National Laboratory experiment, dubbed the Aurora Generator Test, in which hacking into an electric generator’s control system caused it to go out of control and self-destruct.


How malware gets in

Malware can enter a system through any interface it has with the outside world, even if that interface is not known to users or system administrators. Infection can come through the Internet or via removable media (the first viruses spread via floppy disks, and the Stuxnet worm is believed to have been propagated by flash memory sticks). But it can also come through other entry points, like unauthorized modems or wireless access points that users install for their own convenience.


What about wireless?

Wireless connectivity is rapidly gaining in popularity. Wireless system makers insist their encryption protocols provide good security, but wireless systems have one major potential entry point for threats. “While a padlocked fence with razor wire may prevent someone from tapping into your fiber optic or copper cable,” points out Chris Shipp, CISSP, CISM, Cyber Security Manager, Contractor, DynMcDermott, U.S. Dept. of Energy Strategic Petroleum Reserve, “it almost certainly will have no effect on someone attempting to hack into a wireless device, because they do not need to have physical access to be successful.”

So how safe are wireless networks, really? The answer to that, says Bradford H. Hegrat, CISSP, CISM, Sr. Principal Security Consultant, Rockwell Automation, depends on a number of variables. “[Y]ou have to consider the intended use of the network,” he says, “and whether or not it will be used for control, alarming and events, reporting or just higher level information sharing. Each one of these items has a different weight on data confidentiality, integrity and availability.” The nature of the network is also important, he continues: Does it provide readings from some otherwise inaccessible gauges, or is it part of a plant-wide control network?

Before looking at security, he suggests, one should check availability. It’s best, he explains, to design the control system so it can continue to operate even if the wireless network goes down. “Once the designers get beyond that,” he says, “we can start to talk about protocols, encryption, access methodologies, site surveys, etc.” And while some of the earlier wireless systems had weak security, adds Shipp, the newer ones seem much more robust.


Stuxnet

W32.Stuxnet, which has been variously called a worm, Trojan or virus, represents a step change in malware. For one thing, it’s enormous, at about half a megabyte, and its creation clearly required a very large investment of time and manpower, including intimate knowledge of the targeted equipment and extensive quality control testing. More importantly, while it is not the first piece of malware to target industrial equipment, it is the first worm discovered to include a programmable logic controller (PLC) rootkit. It gets into Windows-based systems via at least four previously unknown (zero-day) vulnerabilities, which is unusual for malware (the vulnerabilities have since been patched by Microsoft).

Stuxnet is both very sophisticated and very specific. It targets control systems that use Windows and Siemens WinCC/PCS 7 SCADA software for PLCs, and it burrows deeply into both the Windows machine running the overall control system and the PLCs themselves. When it has the opportunity, it reports what it finds back to, and receives updates from, its command and control servers (or did, until those servers were taken down). But it mounts an actual attack only on systems that use two particular brands of variable-frequency motor drives (one made in Iran and the other in Finland) running within a particular frequency range. That is an extremely specific attack. Symantec reports that the vast majority of infected systems are in Iran.

Stuxnet most likely gets into a system initially via an infected memory stick, but it can then spread to other computers networked with that one, although patches to prevent that are available. In addition, the major antivirus and anti-spyware vendors have included Stuxnet signatures in their databases, which means that Stuxnet is probably no longer a threat to companies that update their protection signatures and are diligent in running malware sweeps.

But while Stuxnet may be defanged, the principles on which it operates, and the technology embedded in it, signal a new age for malware.

An attack does not need to be as sophisticated as Stuxnet to gain entry to unprotected control systems. The Nessus Project maintains an extensive list of plug-ins that detect many known vulnerabilities in PLCs and other control equipment. In addition, “As of Oct. 1, 2010, the national vulnerability database has 51 [ICS] vulnerabilities currently listed, and organizations like Critical Intelligence are tracking 119 disclosed vulnerabilities,” said Michael J. Assante, President and CEO of the National Board of Information Security Examiners, in testimony at a hearing entitled “Securing Critical Infrastructure in the Age of Stuxnet” before the Senate Committee on Homeland Security & Governmental Affairs.

Referring to Stuxnet as what it is — a weapon — Assante went on to say that “we must understand that the attacks we should be most concerned with are not designed to disable their digital targets, but to manipulate them in an unintended way to achieve a desired physical outcome. Many professionals have limited their thinking to dealing with the loss of individual elements or capabilities of their control systems and have failed to fully embrace the implications of calculated misuse.”

And we are vulnerable, Assante went on: “[W]e have not sufficiently studied nor considered the potential for these types of attacks on large interconnected systems, such as the electric grids, or in highly controlled and potentially dangerous industrial processes.” And Shipp sees in Stuxnet “confirmation that recently there has been and will continue to be increasingly sophisticated and targeted attacks against critical infrastructure components.”

How likely is such an attack on us? “It is inevitable,” Shipp continues, “that an attack using something very similar to Stuxnet will be launched against the West. The question is not if, but when such an attack will occur.”

Hegrat points out that Stuxnet was a point-targeted tactical digital weapon system whose effectiveness depended on its specificity. “To have a widespread, generalized attack focused on the West,” he says, “one would have to change the goals of such an attack from point target disruption/sabotage/destruction, to area, industry or theater-level disruption of service/operational effectiveness,” which would diminish its effectiveness. That said, he goes on, it would not be difficult to create a denial-of-service-level attack that would, for example, disable (brick) all Siemens PLCs.


What Stuxnet represents

Stuxnet, like the Operation Aurora attack on Google, says Hegrat, is an example of an Advanced Persistent Threat (APT). “The real danger with APTs is their resource pool and determination,” he says. “This determination can include physical compromise like Stuxnet through USB (essentially, Stuxnet compromised human beings and used human beings as an attack vector). According to InformationWeek.com, 1 in 4 malware packages spreads via USB.” In the future, he continues, “[w]e will continue to see this attack vector growing; couple that with an insider APT and this will be extremely damaging.”


What companies can do to protect themselves

With all the threats out there, and all the ways malware can get in, how can a company protect itself? The first step is making sure all potential entry points are known and secured. The use of removable media like memory sticks, CDs and DVDs should be banned — and the ban strictly enforced, with a written policy so that violators can be dealt with. Anyone who wants to play computer games can do it at home.

If a plant has both a control network and an office network there should, ideally, be an air gap (no connection at all) between them. This can seldom be done (management tends to want to see what’s happening in the plant), so whatever connection there is must be heavily protected with firewalls and other means.

Another step seems obvious, but is often neglected: Make sure all protective software (virus blockers, etc.) is kept up to date. For a home or office PC this can be a simple matter, since most protective software vendors offer automatic updating of their databases, but in a plant it’s not so easy. For one thing, these updates tend to come via the Internet, and all Internet connections should be kept locked down, so the updates may have to be done manually. And even if there’s a well-thought-out way to do the update, there’s always the worry that making any change may cause something to quit working, so software updates may well be deferred, and even vital security patches from Microsoft may not get installed when they should.
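As a rough illustration of that discipline, not drawn from any particular vendor’s product, a short script can at least flag when a locally staged signature database has gone stale and a manual update is overdue. The file path and the seven-day threshold below are assumptions made for the sketch, not vendor defaults:

    # Sketch: warn when a locally staged signature database is stale.
    # The path and the seven-day policy threshold are assumptions.
    import os
    import time

    SIGNATURE_DB = "/opt/av/signatures.db"   # hypothetical staging location
    MAX_AGE_DAYS = 7                         # assumed site policy

    def signature_age_days(path: str) -> float:
        """Days since the signature file was last modified."""
        return (time.time() - os.path.getmtime(path)) / 86400.0

    if __name__ == "__main__":
        age = signature_age_days(SIGNATURE_DB)
        if age > MAX_AGE_DAYS:
            print(f"WARNING: signatures are {age:.1f} days old; schedule a manual update")
        else:
            print(f"OK: signatures are {age:.1f} days old")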

One technique that seems to be gaining followers is the use of whitelisting, which involves establishing a list of applications and software vendors considered trustworthy; only applications on the whitelist, or software from approved vendors, can be run. Also useful is host-based intrusion prevention system (HIPS) technology, which watches for suspicious activity within a particular host computer. Other intrusion prevention approaches include network-based intrusion prevention systems (NIPS), wireless intrusion prevention systems (WIPS) and network behavior analysis (NBA).
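To make the whitelisting idea concrete, here is a minimal sketch of a hash-based application whitelist check in Python. The allowlist file name is hypothetical, and a real whitelisting product enforces this at the operating-system level rather than in a script; the sketch only shows the underlying default-deny-by-known-good-hash principle:

    # Sketch: allow a program to run only if its SHA-256 hash is on an
    # approved list. File names here are hypothetical.
    import hashlib
    import sys

    ALLOWLIST_FILE = "approved_hashes.txt"  # one SHA-256 hex digest per line

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def is_whitelisted(path: str) -> bool:
        """True only if the file's hash appears in the approved list."""
        with open(ALLOWLIST_FILE) as f:
            approved = {line.strip().lower() for line in f if line.strip()}
        return sha256_of(path).lower() in approved

    if __name__ == "__main__":
        target = sys.argv[1]
        print(("ALLOW: " if is_whitelisted(target) else "BLOCK: ") + target)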

Industrial Defender also recommends strict firewall egress filtering. Stuxnet and some other malware are designed to communicate with specific command & control (C&C) servers via the Internet, both to upload information they have stolen and to download instructions from their masters (similar to the automatic updating of an antivirus program). If the malware cannot connect to its C&C server, it cannot change its function.
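The egress-filtering principle can be sketched in a few lines. Real filtering belongs on the firewall itself, and the addresses and ports below are hypothetical, but the point is the default-deny stance: outbound traffic is permitted only to destinations the plant has explicitly approved, so an infected machine cannot quietly phone home:

    # Sketch of default-deny egress filtering: permit only explicitly
    # approved destination/port pairs. Addresses and ports are hypothetical.
    ALLOWED_EGRESS = {
        ("10.1.1.20", 443),   # hypothetical patch-staging server
        ("10.1.1.21", 123),   # hypothetical internal time server
    }

    def egress_permitted(dst_ip: str, dst_port: int) -> bool:
        """Default-deny: allow only destinations on the approved list."""
        return (dst_ip, dst_port) in ALLOWED_EGRESS

    # An attempt to reach an unknown external host would be denied.
    for dst_ip, dst_port in [("10.1.1.20", 443), ("203.0.113.50", 80)]:
        verdict = "permit" if egress_permitted(dst_ip, dst_port) else "deny"
        print(f"{dst_ip}:{dst_port} -> {verdict}")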

While all these precautions are good, Hegrat insists they are, and always have been, insufficient. “It’s been my stance for my entire career,” he says, “that you must always assume that you are compromised.” That position, he continues, is mirrored by the National Security Agency’s Director of the Information Assurance Directorate (IAD) with respect to U.S. Department of Defense networks, and it is assumed in many current security standards, such as ISA99 and NIST 800-82, as well as in the Rockwell Automation/Cisco Systems Converged Plantwide Ethernet (CPwE) Design and Implementation Guide.

The answer to this can be found in the motto, “The proof is in the packet,” Hegrat concludes. “In this mindset of hypervigilance,” he says, “the only way you can ensure hostile entities are not communicating on your control systems networks is to do that detailed packet analysis. If you control the variables you can whitelist the traffic. Essentially, controlling the variables is the key to a more secure system. After all, if the [National Security Agency] says ‘There’s no such thing as secure anymore,’ I’d have to say that security has always been a myth and that it never existed in the first place.”
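What “whitelisting the traffic” might look like in practice can be sketched as follows, assuming flows have been exported from a network monitor to a CSV file. The file format, addresses and ports are hypothetical, and a production system would do this continuously rather than in a batch script:

    # Sketch: flag any observed network flow that is not on the list of
    # flows the control system is expected to generate. The CSV columns,
    # addresses and ports are hypothetical examples.
    import csv

    EXPECTED_FLOWS = {
        ("192.168.10.5", "192.168.10.10", "44818", "tcp"),  # e.g., HMI to PLC
        ("192.168.10.5", "192.168.10.11", "502", "tcp"),    # e.g., Modbus/TCP poll
    }

    def flag_unexpected(capture_csv: str):
        """Yield every observed flow that is not in the expected set."""
        with open(capture_csv, newline="") as f:
            for row in csv.DictReader(f):
                flow = (row["src"], row["dst"], row["dst_port"], row["proto"])
                if flow not in EXPECTED_FLOWS:
                    yield flow

    if __name__ == "__main__":
        for flow in flag_unexpected("observed_flows.csv"):
            print("UNEXPECTED:", flow)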

Shipp also recommends the 800-series guidelines published by NIST.


Getting expert help

While many of the procedures and recommendations mentioned seem fairly straightforward, some are fairly complex and even the simple ones can add up to a significant amount of work. And misguided attempts to patch security holes can cause systems to malfunction. It takes intimate knowledge of the systems to be protected and considerable expertise to make sure you’re protected without causing plant problems. Assigning a dedicated staff person to the role of chief security guru (and making sure that person receives the training and certification needed) is an excellent plan, but it’s also a good idea to hire an expert, at least to get started.

But how do you evaluate the skill level of an “expert”? Start by listening to what the candidate talks about, suggests Shipp. “[E]ffective cyber security is a business requirement and it must be approached first from a business perspective, not a technical one,” he says. “The most effective security consultants will speak in terms of business risk and mitigation of risk.” The consultant should talk about defense in depth; he or she should have experience on your type of equipment, “and preferably industry-recognized certifications from ISACA, (ISC)2 or SANS.” The last named, he continues, should be enough if you are simply looking for a technically skilled consultant to perform a penetration test. And before you share any company information with a prospective consultant, or allow access to your systems, do a thorough background check.

A background and reference check may be difficult, however, says Hegrat, because most such people and organizations require a non-disclosure agreement, and may also need legal indemnification before performing penetration and social engineering tests. A consensus is developing, he continues, that the best course is an interview with detailed questions on the interviewee’s scope of services and deliverables. That can be followed up, he goes on, with a trick from the penetration tester’s toolbox: “[D]o some Internet-based research not only on the consultancy, but the consultant themselves. The ICS Security field is dangerously small — we are all pretty easy to find on the Web.”

[1] The Stuxnet Worm and Options for Remediation, by Andrew Ginter, Chief Security Officer, Industrial Defender. Last updated Aug. 23, 2010.


Organizations and companies with useful information

  • American Chemistry Council Chemical Sector Cyber Security Program
  • ASIS International has a variety of reports and other useful information.
  • British Columbia Institute of Technology on cyber security
  • Sandia National Laboratories’ Center for SCADA Security
  • CGI Security has up-to-date news on cyber security matters.
  • The Department of Homeland Security’s National Cyber Security Division explains some of the many things DHS is doing in this field. Also see the DHS US-CERT Control Systems Security Program.
  • Idaho National Lab maintains a site on control system security.
  • Industrial Defender has a great deal of material on Stuxnet and other threats to industrial control systems.
  • ISACA, the Information Systems Audit and Control Association, engages in the development, adoption and use of knowledge and practices for information systems, and provides certification for information security professionals.
  • (ISC)², The International Information Systems Security Certification Consortium, Inc., provides education and certification for information security practitioners.
  • The Institute for Information Infrastructure Protection (I3P) at Dartmouth College is a consortium of leading universities, national laboratories and nonprofit institutions dedicated to strengthening the cyber infrastructure of the United States.
  • The Nessus project provides network vulnerability scanning as a service or as a purchased product.
  • The New York State Office of Cyber Security provides news and a good assortment of other resources.
  • NIST has a number of useful materials related to cyber security, starting with Special Publications in the 800 series, which are documents of general interest to the computer security community. In addition, NIST maintains the National Vulnerability Database.
  • Rockwell Automation: www.rockwellautomation.com/security and www.ab.com/networks/architectures2.html
  • Sandia National Laboratories on Cyber Security
  • The SANS (SysAdmin, Audit, Network, Security) Institute offers information security training and security certification.
  • The Symantec Critical Infrastructure Study focused on six key critical infrastructure segments: Energy, Banking & Finance, Communications, IT, Healthcare, and Emergency Services. The goal of the study, according to Symantec, was to find out how aware critical infrastructure companies were of government efforts in this area and how engaged and enthusiastic private enterprise was about working with government. A document giving Global Results is available for download, as is a slide show giving highlights. Symantec’s analysis of Stuxnet gives details on Stuxnet and how it works.
  • Siemens Automation’s page on Stuxnet contains up-to-date information on cyber security, including Siemens’ efforts with respect to Stuxnet.

Note: Mention of particular commercial entities is provided as a service to our readers, and in no way constitutes a commercial endorsement of same.
