Experts probe Net's natural defenses

by Alan Boyle

The Internet’s organic structure explains why it’s so resistant to random failures, but researchers now say those same features make it vulnerable to cyberattacks. The findings could help security experts strengthen weak links in the Net’s chain.

THE LATEST STUDY, published in Thursday’s issue of the journal Nature, builds on earlier studies by the researchers, Reka Albert, Hawoong Jeong and Albert-Laszlo Barabasi.

They found that samples of the World Wide Web didn’t have a random structure: Instead, the connections exhibited a hierarchy similar to that found in naturally occurring networks such as trees and living cells, with a small proportion of highly connected nodes branching off to a large number of less connected nodes. The structure was the same at different scales, meaning that the results could be extended to the Web as a whole, they said.

This “scale-free” pattern is reflected in the structure of the Internet — that is, the global network of routers and lines knitting computers together — as well as in the connections between Web pages sitting on those computers.

In the new study, the trio focused on the implications for the Internet’s survivability in the face of failure. Turning again to their “small-world” samples, they found that the Internet, the Web and other scale-free networks could stand up to even unrealistically high rates of random failures.

“Even when as many as 5 percent of the nodes fail, the communication network between the remaining nodes in the network is unaffected,” they reported.

That’s because in most cases, a random failure — say, the breakdown of an Internet data router — will affect nodes with little connectivity. The flow of Net traffic can simply take another, no less convenient route. In contrast, the performance of a randomly connected network, also known as an exponential network, degrades steadily as failures increase.
This explains why the Internet chugs right along even though pieces of the network frequently break down.

“The system evolved into this stage in which it’s completely tolerant against errors, and it’s not only because of redundancy,” Barabasi said in a telephone interview from Romania, where he’s on sabbatical. “It’s much more than redundancy.”

But the researchers say there’s a flip side: Although the structure is particularly well-suited to tolerate random errors, it’s also particularly vulnerable to deliberate attack. If just 1 percent of the most highly connected Internet routers or Web sites were incapacitated, the network’s average performance would be cut in half, said Yuhai Tu of IBM’s T.J. Watson Research Center.

“With only 4 percent of its most important nodes destroyed, the Internet loses its integrity, becoming fragmented into small disconnected domains,” he wrote in a commentary published in Nature.

This vulnerability represents the “Achilles’ heel” of the Internet, Tu said. An attack on the key Internet access points would be far more serious than an attack on the biggest Web sites, Barabasi said. “Then there’s no Internet, there’s no e-mail, there’s no Web, because you can’t get from A to B,” he said.

Internet security experts who reviewed the research said it meshed with their own real-world experience, although they cautioned that other factors helped guard against cyberattacks.

The Internet traffic network has some intersections so key that “if you take down this limited number of points, you could take down a great deal of connectivity,” said Jim Jones, director for technical operations for response services at Global Integrity Corp., a Virginia-based computer security firm.

However, Jones noted that the Notre Dame study was based on a snapshot of the network. “The Internet is not static, and the snapshot they took of it may not match the backup connections that may kick in. … There are redundancies that wouldn’t necessarily show up,” he said.
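The contrast the researchers describe can be illustrated with a toy simulation. The sketch below is not their actual Internet data or code; the network size, attachment rule and failure rates are all illustrative assumptions. It grows a small scale-free network by preferential attachment, then compares how the largest connected component holds up when 5 percent of nodes fail at random versus when the 5 percent best-connected hubs are deliberately removed:

```python
import random

def scale_free_graph(n, m=2, seed=1):
    """Grow a toy scale-free network by preferential attachment:
    each new node links to m existing nodes chosen in proportion
    to their current degree, so a few hubs accumulate most links."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}
    degree_list = [0, 1]          # node ids repeated once per link endpoint
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(rng.choice(degree_list))
        adj[new] = set()
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
            degree_list += [new, t]
    return adj

def giant_component_fraction(adj, removed):
    """Fraction of surviving nodes in the largest connected component
    (a rough proxy for 'the network still works')."""
    alive = set(adj) - removed
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best / len(alive)

n = 2000
adj = scale_free_graph(n)
rng = random.Random(7)

# Random failure: knock out 5 percent of nodes at random.
random_loss = set(rng.sample(sorted(adj), n // 20))
# Targeted attack: knock out the 5 percent most-connected hubs.
hubs = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
targeted_loss = set(hubs[: n // 20])

print("random failures :", round(giant_component_fraction(adj, random_loss), 3))
print("targeted attack :", round(giant_component_fraction(adj, targeted_loss), 3))
```

On a run of this sketch, random failures leave nearly all surviving nodes in one connected component, while removing the hubs carves the network up far more severely — a qualitative echo of the behavior reported in Nature, not a reproduction of its numbers.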
A potential attacker would have to have detailed knowledge of how the Net works, how to take down key Internet locations that are heavily protected precisely because they’re so essential, and how to cut off the backup avenues.

Another expert took issue with the Notre Dame researchers’ claim that the Internet was inherently vulnerable to attack. “Actually, these problems aren’t inherent at all … and we can eliminate them,” said David A. Fisher of the CERT Coordination Center in Pittsburgh. The center is among the nation’s pre-eminent computer security organizations. As the Internet becomes more distributed and less hierarchical, the number of potential targets should decline, Fisher said. “There are only a few points that are noticeable, but it’s not because of a weakness,” he said.

Jones said the research validates what’s known about Internet security a little more rigorously, “and certainly gives us some better directions” on how to address the Net’s natural vulnerabilities.

“The structure of the Internet is a product more of economics than design,” he noted. To cite an exaggerated example, it’s cheaper to link less connected nodes to a bigger-bandwidth backbone than to install fiber-optic lines and heavy-duty routers in every household.

“If we were to design (the Internet) as an exponential network, we would have decent survivability to random error and incredible survivability to directed attack … but it would be inconceivable to do that because of the cost,” he said.

Nevertheless, making the Internet more uniformly connected might be a goal to shoot for, he said. To cite a real-world example, Jones compared the current Internet to the Napster file-swapping service, which depends on a central point of connection, while the more dispersed Gnutella service has more of an exponential structure. “With Gnutella, I have no one point to focus on,” Jones said.
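Jones’s “exponential network” corresponds, in model terms, to a uniformly random graph in which every node has roughly the same number of links. The companion sketch below (again a toy illustration, with the size and average degree assumed for demonstration) shows why a directed attack has much less to aim at in such a network:

```python
import random

def exponential_graph(n, avg_degree=4, seed=1):
    """Toy 'exponential' network: link every pair of nodes with the
    same small probability, so degrees cluster around the mean and
    no dominant hubs ever form."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def giant_component_fraction(adj, removed):
    """Fraction of surviving nodes in the largest connected component."""
    alive = set(adj) - removed
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best / len(alive)

n = 2000
adj = exponential_graph(n)

# "Directed attack": remove the 5 percent best-connected nodes.
hubs = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
attacked = set(hubs[: n // 20])
print("after attack:", round(giant_component_fraction(adj, attacked), 3))
```

Because the best-connected nodes in a uniformly random graph are barely more connected than average, removing them degrades the network only gradually rather than fragmenting it — which is the survivability Jones describes, bought at the cost of wiring every node comparably well.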
Mark Rasch, a former federal prosecutor who is now Global Integrity’s vice president for cyber law, said an exponential approach would also make it “much more difficult to regulate Internet content, because it grows organically.”

Rasch said another way to beef up the Internet’s survivability in the face of attack would be to “build off a second Internet,” such as the high-bandwidth Internet2 infrastructure currently in development.

But Fisher said redundancy alone would not protect the Internet from deliberate attack.

“We’re concerned not so much with failures in the traditional sense, but with intelligent adversaries who are trying to cause failures,” he said.

“It turns out that redundancy doesn’t help at all. What helps is redundancy coupled with diversity.”

If network administrators use the same setup for all their primary and backup systems, that simply leads to a situation in which “one attack fits everybody,” he said. In the long run, Fisher said, the best way to fend off mass attacks against any network is to eliminate the single points of failure, wherever they exist.

Wednesday, July 26, 2000

TopicID: 303