IoT Security: What to Expect as a Vendor When Joining the Connected World

Posted by George Yunaev on 2015-07-31 14:30:00

... or lessons to learn from the recent mistakes of Fiat Chrysler Automobiles (FCA)

Many vendors are now adding Internet connectivity to their products, enabling new features and letting the devices send information back to them. Unfortunately, for vendors who have never developed connected products before, these additions also carry a greater risk of shipping a high-impact security vulnerability. Case in point: the vulnerability recently discovered in the wireless service (Uconnect) of a Jeep Cherokee, which affects several FCA connected car models and resulted in the recall of 1.4 million vehicles. The researchers who discovered it showed how this security flaw could enable hackers to take control of the car’s brakes, engine and electronic equipment.

While the computer industry has developed some rules and expectations about vulnerability testing, disclosure and reporting, the case mentioned above is proof that vendors entering this market might not be aware of them. And as FCA’s case shows, they’re prone to mistakes that can result in system vulnerabilities – or worse.

So, as a vendor that’s just joining the “connected” world, here are some things you might not expect, but you definitely should when dealing with anything related to Internet of Things (IoT) security:


1. Expect your devices to be studied for vulnerabilities

Read on if you’re thinking:

  • ‘Nobody would bother to study our product for vulnerabilities, since we're too small’;
  • ‘Nobody would dare to study our product for vulnerabilities, since we're too big and have lots of lawyers on payroll’;
  • ‘We will always be notified in advance/requested permission before such study would occur’;
  • ‘Nobody would bother to study our product for vulnerabilities if we clearly state we won't pay for reports’.

As soon as your device is released to the general public, expect it to be studied by knowledgeable people attempting to find weaknesses. This may involve hardware and software probes, code analysis, reverse-engineering, fuzzing and other means of finding vulnerabilities. This is how security research works.
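To make the fuzzing part concrete, here is a minimal illustrative sketch in Python – not a real fuzzer, which would also track crashes and code coverage, and the device address and port are hypothetical – that simply throws random bytes at an exposed network service and watches for misbehavior:

    import os
    import socket

    TARGET = ("192.0.2.10", 6667)   # hypothetical device address and service port

    for attempt in range(1000):
        payload = os.urandom(64)    # random garbage instead of a valid protocol message
        try:
            s = socket.create_connection(TARGET, timeout=2)
            s.sendall(payload)
            s.recv(256)
            s.close()
        except OSError as e:
            # A reset, hang or crash here is a lead worth investigating further
            print("attempt {0}: service misbehaved: {1!r}".format(attempt, e))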

Don't think you're too small or not interesting enough to be a target. Obviously, the attention you get depends on the perceived value of a breach, which means a connected car is likely to attract more skilled researchers than a connected toothbrush. However, even small devices such as WiFi-enabled lightbulbs have been hacked, and such hacks have been presented at DefCon. As such, estimating the impact of a potential vulnerability should be part of your mitigation plan.

You should also understand the researchers’ motivation, which differs from researcher to researcher. Some researchers are motivated by fame, some are security-conscious and want to ensure a device they and their loved ones are using is safe, and some make a living by doing research. But all legitimate researchers have the same goal: finding vulnerabilities, having them fixed, and thus contributing to making a product more secure. In FCA’s case it is clear the company didn't even think their cars would be studied for vulnerabilities, and that no internal penetration tests were performed, since even a basic network scan would have found the open ports.
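For illustration, such a basic network scan can be as simple as the following Python sketch (the device address is hypothetical), which tries each TCP port in turn and reports the ones that accept a connection:

    import socket

    HOST = "192.0.2.10"     # hypothetical device address on the test network

    for port in range(1, 1025):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        if s.connect_ex((HOST, port)) == 0:     # 0 means the connection was accepted
            print("port {0} is open".format(port))
        s.close()

Every open port found this way is attack surface, and should be accounted for in your threat model.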

It is important to understand that these researchers are doing a service to the general public, even though their actions may inconvenience you, the vendor – after all, your product should not have vulnerabilities in the first place! So keeping a good, healthy relationship with researchers is very important for your product’s success. Do not expect researchers to ask for your approval either – this rarely happens, and only when you have a good relationship with them.

Things to do:

  • Expect your product to be tested for vulnerabilities right away, and don't let anyone saying “we will prohibit such testing in our license agreement” convince you otherwise;
  • When designing your product, involve security engineers as early as possible. Many vulnerabilities can be eliminated at the design stage by keeping components separate – so that no issue with your car stereo can affect your car brakes.
  • Perform vulnerability testing on your product. Vulnerabilities like the BMW security flaw and the one found in the Jeep Cherokee would be obvious to any security engineer who examined the product in the lab. Make sure your testing team includes people dedicated to finding vulnerabilities, or hire a third-party company to do that.
  • Repeat the vulnerability testing every time you have a new release. Very often, newly added features also add vulnerabilities.

2. Expect the vulnerabilities to be found

Read on if you’re thinking:

  • ‘We will have no vulnerabilities, our QA team said our device is secure’;
  • ‘We don't need a mitigation plan now; we'll think of something once the vulnerability is found’;
  • ‘If we need to update our device tomorrow, we don't know how we would do that’.

Past experience shows that no connected device is secure. Vulnerabilities have been found in the Microsoft Windows (including the latest 8.1)[1] and Linux[2] operating systems, in mobile operating systems such as Android and iOS[3], and in many connected devices – from WiFi-enabled bulbs[4] to surveillance and in-home cameras[5], and the already mentioned BMW and Jeep connected cars. So if your device is connected to any kind of network allowing user interaction, its chances of having vulnerabilities are certainly non-zero.

Even if your application itself is secure, vulnerabilities may still be present in the underlying software you use – the Web server, or the operating system itself. Of course, for your customers it makes no difference: it is your product that is vulnerable, and they will expect a fix from you.

In FCA’s case, it is clear the company had no mitigation plan. Even though the cars are connected to the Internet, no over-the-air firmware update was possible, and the company had no means to check remotely which cars had been updated. An update procedure so cumbersome that it requires a full recall does not allow for quick, seamless updates, which means more customers remain at risk.

Things to do:

  • Have a written plan outlining what you would do if a vulnerability was found, and assigning responsibilities. Review it a few times throughout the year to make sure it stays current.
  • Implement a secure way to update your software over the network when needed. Keep in mind the local firmware may be compromised. (A minimal signature-verification sketch follows this list.)
  • Implement a secure way to update your software locally, in case the device is compromised and cannot connect to the network.
  • Test both update paths before going to production.
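To illustrate the secure-update point above, here is a minimal sketch of verifying a firmware image before installing it. It assumes an RSA-signed image and the Python “cryptography” package; the file names and vendor key are hypothetical, and a production updater would also pin the key in hardware and protect against rollback attacks:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def firmware_is_authentic(image_path, sig_path, pubkey_path):
        # True only if the image was signed with the vendor's private key
        with open(pubkey_path, "rb") as f:
            pubkey = serialization.load_pem_public_key(f.read())
        with open(image_path, "rb") as f:
            image = f.read()
        with open(sig_path, "rb") as f:
            signature = f.read()
        try:
            pubkey.verify(
                signature, image,
                padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                            salt_length=padding.PSS.MAX_LENGTH),
                hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    if firmware_is_authentic("fw.bin", "fw.bin.sig", "vendor_pub.pem"):
        print("signature OK, proceeding with update")   # hand off to the flashing routine
    else:
        print("rejecting unsigned or tampered firmware")  # never install it; report the event

The point of the signature check is that even an attacker who controls the download channel (or the local firmware) cannot push malicious code to the device without the vendor’s private key.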

3. Expect information about discovered security flaws to be published

Read on if you’re thinking:

  • ‘If we make it difficult to report a vulnerability to us, the researchers will give up and forget about it’.
  • ‘We can tell the researcher not to publish this information and they will oblige’.
  • ‘We can gag the researcher with our lawyers or a court order, so nobody else would know, thus our reputation would not be damaged’.
  • ‘We can work on the fix as long as we want, and only then allow the information to go public’.

As a general rule, a responsible researcher who finds a vulnerability discloses it to the public. Many perceive this as a duty – a kind of ‘code of honor’. The disclosure serves two main purposes: it puts pressure on the vendor to fix the issue quicker than the typical software release process that takes months, and it informs the public about the issues in the product so people can protect themselves (and put pressure on the vendor if the issue is not fixed).

However, researchers also understand that publicly disclosing a vulnerability without giving the vendor an opportunity to fix the issue may be against the public interest. Therefore, most security researchers follow a process called “responsible disclosure”: they notify the vendor about the vulnerability and give them some time (typically 30-90 days) to fix it. After this time expires, the vulnerability information goes public. A typical disclosure policy can be found on Google’s online security blog.

Obviously, to notify the vendor, the researcher must be able to reach the vendor staff who would understand the issue. In some cases, researchers found it difficult to reach the right people in the affected companies, so they gave up and went ahead with public disclosure right away. Needless to say, this situation would not be in your interest, so please make it easier for researchers to communicate their discoveries.

It is also against your interest as a vendor to attempt to gag researchers through legal or other means. The community is tight, and the word will go around, guaranteeing that this is the last time you hear about a vulnerability in your product from a researcher – next time you will learn about it from the morning news on national TV, which is obviously less desirable.

In some cases, researchers do disclose issues directly to the public. One prominent example is the vulnerability in the Onity hotel locks, which went public at Black Hat[6] without Onity being notified in advance. The researcher explained[7] his decision in a well-thought-out post, and I strongly recommend you read it.

In FCA’s case, my attempts to find the company’s security reporting page, or even reporting guidelines, failed. It certainly wasn't easy for researchers either, and the article[8] mentioned that FCA stated:

Under no circumstances does FCA condone or believe it’s appropriate to disclose ‘how-to information’ that would potentially encourage, or help enable hackers to gain unauthorized and unlawful access to vehicle systems,

which again confirms they are new to this area and do not know how security vulnerabilities should be handled.

Things to do:

  • Keep in mind that receiving reports about vulnerabilities in your product before the official public disclosure is a privilege, not a right. So make it easy for researchers.
  • Create a dedicated web page on your site about reporting security vulnerabilities, easy to find through search engines. Be specific about what information you need, and keep the page free from legalese and other conditions that may deter people from reporting issues to you. IBM has a good example of such a page[9].
  • Allow submissions via a Web form or email, and publish your Pretty Good Privacy (PGP) key for email submissions, as a report may contain critical information and email is not considered a secure medium. (A sketch of an encrypted submission follows this list.)
  • Be very careful about who receives those submissions internally. The information may be critical, and the clock starts ticking from the moment you receive the submission.
  • Be ready to engage with the researcher right away; do not wait until the last minute.
  • If a researcher disclosed a bug without reaching out to you first, it is generally your fault as a vendor. See the Onity case above: had the company clearly promised on a reporting page to respect and support responsible disclosure, and not to attempt a cover-up, it would likely have received the disclosure first. So definitely reach out to the researcher in a case like that, and find out what stopped them from coming to you first.
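As an illustration of the PGP point above, here is a minimal sketch of how a researcher would encrypt a report with your published key. It assumes the “python-gnupg” package and a local GnuPG installation; the key file and recipient address are hypothetical:

    import gnupg

    gpg = gnupg.GPG()

    # Import the vendor's public key, downloaded from its security reporting page
    with open("vendor_security_pubkey.asc") as f:
        gpg.import_keys(f.read())

    report = "Affected product and version, vulnerability details, proof of concept..."
    encrypted = gpg.encrypt(report, recipients=["security@vendor.example"],
                            always_trust=True)
    if encrypted.ok:
        with open("report.asc", "w") as f:
            f.write(str(encrypted))     # ASCII-armored ciphertext, safe to email

The easier you make this workflow – a prominently published key and a monitored mailbox – the more likely the first person to tell you about a vulnerability is the researcher, not the press.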

4. Expect your actions after the vulnerability is found to be scrutinized

Read on if you’re thinking:

  • ‘Vulnerability in our product makes us look bad, and allows people to sue us. Hence we must protect the corporate image at all cost, customers come next’.
  • ‘Customers don't need any details about the vulnerability, especially if they make us look bad’.
  • ‘Customers don't need to know what other changes we're making to secure our product, as they don't care if the same vulnerability appears again (for example through Bluetooth)’.

Your reaction to the published vulnerability is very important. The public looks closely at you, trying to judge not only how fast you fix the vulnerability, but how trustworthy you are in general. For example, people often ask questions like: ‘Would you have fixed it at all if it wasn’t for the public disclosure?’ ‘Was it an oversight in an otherwise strong process, or can anyone dig up a hole just by looking further?’ ‘Whose interest are you protecting, your own or your consumers’?’

FCA's actions were far from perfect. Overall, the company gave a strong impression that it tried its best to keep the vulnerability private, even after it had been fixed. The official public release[10] and the recall were announced only at the last minute, after intense publicity in newspapers and on national television. Even after that, FCA showed it cared more about protecting its corporate image than about the safety of its customers by downplaying the importance and severity of the issue. Just one example:

“This update is providing customers with an additional level of security by protecting their FCA vehicle from potential unauthorized and unlawful access,” a Fiat Chrysler spokesperson explained to eWEEK in an email[11].

According to the Fiat Chrysler statement, having your vehicle protected from unauthorized access is considered an additional level of security – one you are not getting by default, and have to drive to a dealership to get (is it possible they might soon start charging for this “additional” service?).

Things to do:

  • Accept as fact that if you're in the US you are likely to be sued anyway, and nothing you say in your corporate statement would prevent that.
  • Give details to your customers. Your customers have the right to know the limits of the risk, so they can decide whether they're willing to accept it, depending on personal circumstances. For example, in the Chrysler case some high-profile targets may decide to leave their car parked until the update is available, while people living in an area with no cell coverage are at much lower risk.
  • Explain clearly and realistically what the threat is, what its potential scope is, and how – if at all – users can mitigate the threat until the update is available. For example, if disabling the cell module requires removing a fuse, and does not affect the car other than losing connectivity, this may be a viable option for some customers.
  • Explain what the mitigation plan is, both short-term and long-term. An update is only a short-term mitigation. What’s also needed is a proper explanation of why the car brakes and engine could be accessed through the communication module at all, and what has been done to cut off or limit that access.
  • Give credit to the researchers who helped you find the vulnerability. Remember, they did the job your QA department should have done.
  • Finally, apologize. You messed up, so at the very least you owe your users an apology.

[1]          http://www.cvedetails.com/product/26434/Microsoft-Windows-8.1.html?vendor_id=26

[2]          http://www.cvedetails.com/vulnerability-list/vendor_id-33/product_id-47/year-2015/Linux-Linux-Kernel.html

[3]          http://www.cvedetails.com/vulnerability-list/vendor_id-49/product_id-15556/year-2015/Apple-Iphone-Os.html

[4]          http://arstechnica.com/security/2014/07/crypto-weakness-in-smart-led-lightbulbs-exposes-wi-fi-passwords/

[5]          http://www.insecam.org/

[6]          http://daeken.com/2012-07-24_Blackhat_paper.html

[7]          “Onity, after 20 years and 4-10M locks, has a vested interest in this information not getting out, as it makes them look bad and costs them a significant amount of money. As such, it's likely that without public pressure -- which we've seen in the form of unrelenting press coverage -- they would have attempted to cover this up”. http://daeken.com/2012-12-06_Responsible_Disclosure_Can_Be_Anything_But.html

[8]          http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/

[9]          www.ibm.com/security/secure-engineering/report.html

[10]        http://blog.fcanorthamerica.com/2015/07/22/unhacking-the-hacked-jeep/

[11]        http://www.eweek.com/security/fiat-chrysler-auto-recall-highlights-rising-fears-about-iot-hacking.html

George Yunaev

George Yunaev is a Senior Software Engineer at Bitdefender. He joined the company's OEM Technology Licensing Unit in 2008, after working at Kaspersky Lab for seven years. Aside from developing SDKs for various OEM solutions, George also provides partners and prospects with useful insights into emerging threats and the potential pitfalls of technology licensing. His 19 years of software engineering experience also cover reverse-engineering and malware analysis. He is based in Silicon Valley, California, and enjoys traveling and active sports such as skydiving and wakeboarding.