Be Careful Not to Make a Bad Deal

Failing to conduct a thorough review of the cyber risks associated with an acquisition target is inexcusable.

Companies are starting to learn that it is very important to pay attention to privacy and cyber risks when conducting M&A due diligence.

In 2017, Verizon became the M&A cyber risk poster child when it learned shortly before its purchase of Yahoo that Yahoo had suffered two of the largest data breaches in history, in 2013 and 2014, affecting 1.5 billion users. Ultimately, Verizon shaved $350 million off the purchase price.

Yahoo had not told Verizon of the breaches. Concerned that Yahoo might have misled investors, the SEC opened an investigation into the matter. The SEC recently settled with Altaba for $35 million for the 2014 breach, the first such fine it has imposed for failure to report a cyber-security breach. (Altaba holds the remaining shares of Yahoo that were not purchased by Verizon.)

The SEC settlement agreement with Altaba noted, “Yahoo’s risk factor disclosures in its annual and quarterly reports from 2014 through 2016 were materially misleading in that they claimed the company only faced the risk of potential future data breaches…without disclosing that a massive data breach had in fact already occurred…In response to queries regarding past data breaches by Verizon during due diligence, Yahoo created a spreadsheet that falsely represented to Verizon that it was only aware of four minor breaches in which users’ identifying information was exposed, but did not disclose the 2014 theft of hundreds of millions of users’ personal data in its response.”

After the close of the acquisition, Verizon revealed that three billion user accounts actually had been breached instead of the 1.5 billion reported by Yahoo. The lesson here is that companies must do their own due diligence on cyber risks. They must demand full access to technical data and reports to ensure they understand the security maturity of the acquisition target’s cyber-security program and have a clear picture of prior incidents.

What You Inherit

An acquirer should not look merely for past incidents, however, because serious cyber events can occur after an acquisition due to unknown vulnerabilities—and the blame and expense will lie at the feet of the acquirer. For example, Marriott acquired Starwood Hotels & Resorts in 2016. In November 2018, Marriott disclosed that Starwood’s hotel guest database had been compromised and highly sensitive personal data on approximately 500 million guests had been exposed. The data included names, addresses, phone numbers, credit card information, passport numbers, family member information, and travel itineraries and dates. In a statement, Marriott said its investigation of the hack revealed that Marriott had learned “there had been unauthorized access to the Starwood network since 2014.”

Wow. The obvious questions are what cyber due diligence Marriott did and why this wasn't uncovered before the acquisition. Within a day, Marriott was hit with a securities class action suit alleging that investors had been harmed due to public misrepresentations, failure to disclose material facts, and material omissions and misrepresentations.

Similarly, PayPal uncovered cyber problems after it acquired TIO Networks in July 2017. A few months after acquisition, PayPal notified TIO customers it was suspending service because it had discovered “security vulnerabilities on the TIO platform and issues with TIO’s data security program that do not adhere to PayPal’s information security standards.” PayPal then issued another statement a few weeks later announcing it had “identified a potential compromise” of TIO’s systems “of personally identifiable information for approximately 1.6 million customers.”

Not surprisingly, a securities class action lawsuit was filed against PayPal a few days later. The suit claims PayPal failed to disclose that TIO’s data security program was not adequately protecting users’ personally identifiable information and that those vulnerabilities “threatened continued operation of TIO’s platform,” making revenues derived from TIO services “unsustainable.” The suit also alleges PayPal “overstated the benefits of the TIO acquisition” and investors were harmed by PayPal’s “materially false and misleading” statements.


The case, which is ongoing, raises the question: what due diligence did PayPal do on TIO's cyber-security program prior to its purchase of the company for $233 million?

The possibility of breaches occurring after an acquisition is a risk companies assume if they blindly acquire targets without conducting good cyber due diligence. Depending on the circumstances, the costs associated with a breach could exceed the purchase price.

In addition to data breaches, it is important for acquirers to investigate whether any of the target company’s confidential or proprietary data may have been stolen or exposed through a cyber attack. This could include pricing and customer lists, intellectual property or trade secrets, strategic information, marketing plans, personnel data or other sensitive information. These data usually represent a significant amount of the value of a company. It is possible, through good cyber due diligence, to uncover breaches, including the theft of data, that had not previously been detected.

Regulatory Costs

Privacy violations and associated investigations are now costing companies serious money. It is crucial that acquirers examine whether there have been prior privacy violations or whether there is the potential for one, which could result in large fines. Such violations may not yet have been detected by the target or reported to authorities. With the May 2018 implementation of the European Union's General Data Protection Regulation, followed by Facebook's Cambridge Analytica data scandal, privacy regulators around the globe have their antennae up, and fines can be hefty, far exceeding the paltry $35 million SEC settlement with Altaba.

In January this year, for example, French regulators fined Google $57 million for failing to clearly inform users how the company was collecting data across about 20 Google services, including Google Maps and YouTube, and using it for advertising. In February, British members of Parliament accused Facebook of “intentionally and knowingly” violating privacy laws and called for investigations and increased regulation of tech companies. Later in February, The Washington Post reported the Federal Trade Commission and Facebook were negotiating a multibillion-dollar fine for privacy infringements at the social media giant that potentially violated its 2011 consent order with the FTC.

The bottom line here is that the green eyeshades running M&A due diligence need to bring in some privacy and cyber-security experts to conduct a thorough assessment of the maturity of the target's cyber-security program, including technical data and reports that could reveal prior incidents. Breaches, class action lawsuits, regulatory fines and investigations can pull millions—if not billions—from the bottom line of the acquirer.

Not every vulnerability or every past or potential breach can be detected, but failing to conduct a thorough review of the cyber risks associated with an acquisition target is inexcusable. The information gathered can be used to estimate the costs associated with strengthening a weak cyber-security program, defending against prior breaches or lawsuits, or estimating potential penalties. It's far better to factor these costs into the purchase price than to hope for the best afterward.

Insurance professionals also should work with their clients to help them manage the cyber risks associated with mergers and acquisitions. Agencies and brokerages can leverage the information obtained through the cyber due diligence process to review policies and ensure their clients have appropriate coverage post acquisition.

Privacy’s Perilous Path

Legal use does not always equate to ethical use.

Lots of things happened in 2018 that focused our attention on privacy.

Facebook got everyone’s attention in March when The New York Times and The Guardian revealed that Cambridge Analytica used the personal data of more than 50 million Facebook subscribers to help the Trump campaign.

A former-employee-turned-whistleblower revealed that Facebook never audited the application developers it allowed to access its data to confirm they were using the data according to terms. Facebook subsequently announced it would conduct a thorough review of all application developer use of its data.

The drumbeat on privacy in the United States was enhanced with congressional hearings that probed Facebook on its data-sharing practices. The controversy revealed how 126 million Facebook users might have been played by Russians in an attempt to influence the 2016 presidential election. A few months later, the Times reported that Facebook had allowed numerous device manufacturers, including Amazon, Apple and Samsung, access to user data without Facebook users’ explicit consent, an apparent violation of a Federal Trade Commission consent decree. Then, late last year, the Times obtained documents indicating that Facebook had entered into agreements with at least 150 companies to share its data, including Amazon and Microsoft.


All the attention fueled investigations over how much of Facebook's data—and other social media data—are shared with third parties. It also raised questions about what and when Facebook knew about Russia's manipulation of its platform and users. The Times reported in late November that Facebook's senior leaders were deliberately trying to keep what the company knew about Russia's tactics under wraps. The company's directors pushed back on that report, claiming they pressed CEO Mark Zuckerberg and COO Sheryl Sandberg to speed up the Russia investigation and calling allegations that the two executives ignored or hindered investigations "grossly unfair."

By mid-2018, online users (that is, all of us) were finally beginning to understand the power of big data. Yet they also realized they really had no idea how every digital fingerprint they leave in texts, emails, Facebook posts, tweets, Google searches, etc., was being shared with others. A Pew Research Center report in September indicated that more than half of Facebook users changed their privacy settings, 40% took a break from Facebook, and 25% deleted the Facebook app on their phone.

Important lesson: privacy expectations can be more powerful than laws, because their hammer is market forces, not fines or penalties. After the Cambridge Analytica scandal, Facebook was forced to report lower-than-expected earnings. Within hours, Facebook lost $130 billion in market value.

Meanwhile, on May 25, 2018, the European Union’s General Data Protection Regulation took effect, forcing companies to focus on what data they have, where they get it and who accesses it. Shortly thereafter, California enacted the California Consumer Privacy Act of 2018, which takes effect next Jan. 1. The law is similar to the European Union’s data protection regulation, but there are key differences. For example, the California law does not require consent to process personal information and does not include the right to be forgotten or to have data corrected—two important features of the EU regulation. Nevertheless, California’s law is as close as any U.S. law has come to emulating EU privacy requirements, a development that thrilled privacy advocates and scared companies.

Ethics of Data Sharing

Another topic that emerged last year was the ethics of data sharing. Wired ran a story last July headlined “Was It Ethical for Dropbox to Share Customer Data with Scientists?” In a Harvard Business Review article, Northwestern University researchers revealed they obtained data from Dropbox and analyzed the data-sharing and collaboration activities of tens of thousands of scientists from over 1,000 universities. Dropbox justified its sharing of this data by relying on its privacy policy and terms of use. The ensuing uproar caused Dropbox and the researchers to clarify that the data had been anonymized and aggregated prior to their obtaining it. Others, however, pointed out how folder structures and file names could still be used to identify individuals. Dropbox was in the hot seat.

The Cybersecurity Division of the Homeland Security Advanced Research Projects Agency funded a multi-year project examining the ethics associated with the use of communications traffic data by cyber-security researchers. The resulting report, known as The Menlo Report, published in 2012, was an early attempt to establish parameters for the ethical use of personal data in cyber-security research projects.


The ethics of data sharing is not always consistent. When a researcher finds a trove of data in a cyber criminal's online cache, the temptation to use the data is probably no less compelling than when Uber was offered Lyft customer receipts in 2017. A privacy policy or terms-of-use statement might give you legal cover for data sharing, but the users whose data you share—or buy—might question your ethics.

Accenture has studied the ethics of digital data and developed 12 “universal principles.” These include:

  • Maintain respect for the people who are behind the data.
  • Create metadata to enable tracking of context of collection, consent, data integrity, etc.
  • Attempt to match privacy expectations with privacy controls.
  • Do not collect data simply to have more data.
  • Listen to concerned stakeholders and minimize impacts.
  • Practice transparency, configurability and accountability.

Companies face real risks, and perhaps internal disagreement, when trying to balance their customers' privacy expectations against maximizing profits. Remember that Sheryl Sandberg reportedly favored keeping quiet about the discoveries of Russian interference and the exploitation of user data, while the company's chief information security officer at the time favored more public disclosure. Two University of Colorado researchers studied the public reactions to the sale of Lyft customer receipts to Uber and to WhatsApp's 2016 announcement that it would share data with Facebook to improve Facebook ads and user experience. Their conclusion is noteworthy:

"Our findings also point to the importance of understanding user expectations when it comes to privacy; whether most users agree that it's okay to be the product or not, shaping expectations with more transparency could help reduce the frequency of these kinds of privacy controversies."

But relying on privacy policies or terms of service can be a perilous path. User expectations of privacy will often prevail over legalese. And no one can really keep a straight face and say they believe their users actually read their privacy policy or terms of service. The events of 2018 struck a note of outrage among online users, and legislators, regulators and plaintiffs' attorneys are paying close attention.

In 2019, organizations would be wise to analyze the data they buy, share, use and store, to examine their legal basis to do so, and to consider that their customers might have contrary privacy expectations. Legal use may still violate a person’s expectation of privacy and thus be viewed as an unethical use. Agents and brokers should encourage their clients to be forward thinking on this issue and proactively manage potential privacy risks associated with their data or the data they may obtain from third parties.