Advertorial feature by Cyber
9 March 2016, updated 9 September 2021, 1:30pm

Where have all the white hat hackers gone?

Can hacking be good for you? Cris Thomas, strategist at Tenable Network Security, thinks it might

By Cris Thomas

With The Hateful Eight and Forsaken lighting up the silver screen, the “western” seems to be experiencing a resurgence as a genre. However, this time around, modern westerns have a dirtier, grittier, more realistic feel than the campy and clean movies of the 1930s, such as Montana Moon, Tumbling Tumbleweeds, Stagecoach and Dodge City. One theory is that the characters of these earlier films were fairly flat: they were either good or bad, right or wrong, with very little ambiguity. And just in case there was any confusion, the audience was given clues as to the hero and the villain by the colour of their hat. Bad guys wore black hats; good guys wore white hats. Easy.

At some point in the 1980s, the white hat/black hat trope of the American western became associated with the fledgling hacker community. “Good” hackers, who identify a vulnerability and tell the company so they can fix it, became known as “white hats”. “Bad” hackers such as the authors of a virus designed to steal banking information became “black hats”.

Here’s the rub. There is no “good” or “bad” to hacking, there is just hacking. Still, the term “hacker” has been used so often in news media and pop culture as a stand-in for “someone who does bad things with computers” that, to most people, “hacking” is synonymous with breaking the law.

Who wears a hat these days?

People who are technologically adept, those who are skilled at solving complex computer problems, those who understand how computers and the networks that connect them work – these people collectively are hackers. It’s what they do with these skills that determines on which side of the fence they sit.

Lest they be confused with criminals, many “good” hackers adopted additional monikers such as “ethical” or “white hat” to distinguish themselves from people who might use similar skills for criminal activities.

The white hat hacker was heralded as a champion for justice, using his or her skills to fight the black hats and save the world from cyber Armageddon. They ride in on their keyboards and network cables to save the “family” server farm from the black-hat-wearing landlord.

However, mud sticks, and as the hacker community transforms into the $170bn global cyber security industry projected for it by 2020, increasing numbers of people are dropping “hacker” from their identity altogether. Those who once might have called themselves white hat hackers now go by corporate-sounding titles such as penetration tester, security researcher, malware reverse engineer, or forensic data analyst.

Incidentally, it’s the same thing on the other side of the fence. Despite one of the largest annual security conferences in the world calling itself Black Hat, even black hat hackers are seldom identified that way today. Instead, they are labelled as cyber criminals, malware authors, hacktivists or nation state actors.

It’s sad, but no one is proud to be called a hacker any longer.

Reward or persecution?

Today, security researchers – arguably the most direct heirs to the white hat legacy – often find themselves persecuted with legal threats for trying to do the right thing. Overly broad and vague laws, such as the Computer Fraud and Abuse Act in the United States and the Computer Misuse Act in the UK, along with the intimidation tactics used by some companies, have convinced many to hang up their “hats”.

Instead of responding positively to someone who points out a flaw in their product, companies all too often fall back on a defensive posture, using legal threats, intimidation or simple delay tactics to keep information about a potential vulnerability from being made public. Many researchers have concluded that the risk is simply too high and have stopped doing security research altogether.

A case in point is that of Cisco, which in 2005 took out a court injunction and threatened to sue the security researcher Mike Lynn to prevent him from revealing information about a vulnerability discovered in one of its routers. More recently, in 2015, FireEye obtained a court injunction to stop researchers for the German firm ERNW from disclosing “too much” information about vulnerabilities discovered in one of its security products.

Perhaps this is why some researchers choose to sell their discoveries to the highest bidder instead of disclosing them to the manufacturer, ignoring the risk that those discoveries may be used by nation states as offensive weapons in a potential cyber war.

However, there is some light on the horizon. The introduction of “bug bounty” programmes is arguably a positive step forward. These programmes encourage researchers to spend their time looking for flaws and to report them responsibly, and compensate them for their time. The relationship is mutually beneficial: the vendor ultimately gets a more secure product, at a lower cost of development, without the risk of a public relations nightmare should a severe vulnerability be discovered and publicised before a patch is available. And everyone benefits from continuously improved, more secure software.

Unfortunately, the percentage of companies that participate in such schemes is exceedingly low, and the terms are often ambiguous.

For example, at the start of 2016, General Motors announced its bug bounty programme, hosted by HackerOne, but it “forgot” about the bounty element. Instead, it laid out to researchers the provisos that would prevent legal action being taken should a vulnerability be discovered – a novel approach, some might say.

We need hackers, now more than ever. The challenge we face is that technology isn’t standing still. For a start, we’re on the cusp of a brave new world with the coming Internet of Things, where everything is connected to the internet.

With the advent of the “connected home”, everything now comes with a wireless stack to send data to the cloud. From mundane objects, such as televisions, frying pans and speakers, to the less mundane thermostats, smart meters and even rectal thermometers, these devices can be attacked, their data manipulated, or they can be used as launchpads for other attacks. Without being alarmist, the potential for abuse is very scary.

Regrettably, the companies developing these items have demonstrated time and time again that they are not capable of creating devices that cannot be compromised. In some cases, the devices cannot even be fixed or updated should they be found vulnerable, or should fixes become available. We, as consumers, are left with these insecure ticking time bombs in our homes, a situation further complicated by the fact that in some scenarios we don’t even own the equipment – we only purchased licences to use it.

We need hackers now more than ever – whether their hats are white, black or some shade of grey, or they choose not to wear a hat at all. We need them to find the holes, to alert the companies responsible and, when necessary, to alert the public at large. Without the hackers, we, the consumers, will be at the mercy of the security afforded by corporations and governments the world over – and all too often that means no security.

Rather than penalise the hackers, let’s make sure we recognise the valuable contribution they can make to building a secure world, and that they are motivated to join the forces of good, rather than evil. Otherwise, it really will be cyber Armageddon, with the sheriffs in the saloon, and the rest of us fighting the good fight on our own.

Cris Thomas was the editor of the Hacker News Network before joining Tenable Network Security.
