
It’s Monday morning, and you reach for your smartphone. Before ever leaving bed, you’ve posted a location-based tweet, logged your sleep via your wearable wristband and checked your bank statement.

You remember to turn off the lights on your way out the door; are you using less energy than you did last week? You check your weekly energy usage on your phone as you wait for the bus, swiping your microchipped transit card to pay your fare.

You’re not even at work yet. Do you know where your data is? Maybe not where you think.

In a data-driven world, innovative technologies allow us to quantify everything from our learning to our “selves.” Yet these new digital identities also expose us to a new suite of privacy issues for which there are few legal answers.

According to Alex Reynolds, senior regulatory counsel at the Consumer Electronics Association, consumer concerns are largely about “expectations versus reality.” Just as companies are beginning to leverage data from people’s personal devices, consumers also are realizing that they have an online identity.

“Consumer expectations evolve over time, and we’re looking at an inflection point,” he said. “It is as simple as a transition period.”

It used to be that a limited number of entities—schools, utilities, health care providers—controlled most forms of data. In addition, those data were kept on paper and stored in filing cabinets. As a result, governments could more easily regulate the use and sharing of people’s personal information.

Not anymore. We’ve entered the Wild West of data, and there often are few regulations to police and govern what new companies can and cannot do with users’ data—even in otherwise-regulated industries like education, energy, health and cities.

But that doesn’t mean startups have complete freedom. Quite the opposite: Some of the most well-funded platforms have been brought down by failure to engage in conversations about transparency. As a result, the data industry is a minefield for young companies that fail to put privacy first.

“Sometimes the conversation about the benefits of how companies can leverage data gets lost in discussions about how it’s creepy—and that’s a completely legitimate concern,” Reynolds said. “A lot of the issues arising (are because) we’re thinking about all data as just data, and we’re using those terms to capture so many different things that are going on.”

So how should privacy-minded consumers and entrepreneurs proceed? What data are companies collecting? What are best practices for areas in which there is no regulation yet? This in-depth report will explore the future of data privacy in four industries, breaking down one major concern in each.

Education | Energy | Health | Cities

Education: How Privacy Advocates Brought Down an Edtech Giant

inBloom would have offered schools the ability to track 400 types of student data, but misrepresentations of the startup in the media ultimately led it to shut down. (Illustration by Jihye Kim)

Today’s parents can log onto Internet portals and monitor their children’s progress and grades. Is that data? Absolutely—and schools have always used that kind of data to help students learn.

And with the rise of educational technology, new tools are sparking conversations about student data—and the ways we can use it. Technology now can enable teachers to crunch numbers quickly and see trends all in one place. Moreover, by helping teachers make data-driven decisions, educational-technology platforms free teachers to focus on human interactions with students.

Much of the data that edtech platforms track are things that schools have always collected: grades, attendance information, academic subjects, achievement test scores or learning disabilities. Now, though, schools also can track information like character traits, social security numbers, “family relationships … and reasons for enrollment changes,” according to The New York Times.

All of these are potentially useful for schools—but they’re not directly related to a student’s academic record. In addition, these are data that parents and students might not want in someone else’s hands. And although federal law regulates when schools can share personally identifiable information with third parties, rules alone haven’t been enough to assuage a tidal wave of parent concerns against edtech.

The most high-profile example is inBloom, a tool created by the Shared Learning Collaborative. The SLC was a joint project between The Gates Foundation, Carnegie Foundation, several state chiefs of education and various other leaders.

According to Iwan Streichenberger, inBloom’s former CEO who came onboard in 2012, “(inBloom) was a bottom-up story where educators came together and asked for help to manage a huge amount of data. The foundations wanted to help state chiefs to build tools to manage data better.”

inBloom was a data-management system that would have allowed teachers to collect, aggregate and sync student data with other platforms. The tool was unlike anything that had come before: It aimed to be completely neutral, allowing all platforms to integrate seamlessly. It would not sell data, monetize data or provide applications for data.

In short, it would have abided by the Family Educational Rights and Privacy Act, a federal law passed 40 years ago to protect student data against misuse. FERPA prevents manufacturers of third-party educational tools and software from selling or marketing students’ personally identifiable data. In other words, when startups step into the classroom, they play by strict rules—the same rules that apply to data housed in an online grades portal, for example. Those data are available to parents only under FERPA.

“(Data privacy) is not just about tech in the classroom,” says Kathleen Styles, chief privacy officer for the U.S. Department of Education. Her job is to advise the secretary of education on issues related to privacy and to enforce FERPA and other external-facing privacy statutes.

“Out of necessity, schools create and manage large amounts of data about a student,” she said. “A couple things have changed and there is more student data—and more types of it.”

Up to 400 types of it, in fact—that’s how many types of data inBloom had the ability to track. Can one 40-year-old federal policy stretch far enough to cover all of them?

To prohibit misuse of so many types of data, protections for data are an amalgam of federal-, state- and district-level policies. It takes time for education departments to adapt to new technologies—and that has led some groups to assert that FERPA is not adequate or sufficiently modern, Styles says. Yet, state legislators have been extremely active when it comes to student data. In the last state legislative cycle, 39 states introduced 110 bills that deal with privacy.

However, not everyone believes those policies are enough, and Leonie Haimson is one parent who is sounding the alarm against potential misuses of student data. Haimson heads up a coalition called Student Privacy Matters, a group of concerned parents and educators who say that the government is not doing enough to enforce protections of student data.

“They use terms ‘personalized’ or ‘student-centered,’ but it’s all being left up to machines with canned software programs,” she said. “There are no people involved in ‘student-centered learning.’”

Those terms are common edtech buzzwords, often thrown around by entrepreneurs who aim to genuinely improve student-learning outcomes. But what happens when companies don’t actively—and accurately—communicate with parents?

In retrospect, Streichenberger says, that’s where inBloom went wrong: assuming the company didn’t need to speak for itself because it was “just software.” Not so. Instead, inBloom’s silence on student data collection and privacy led it to become the scapegoat for massive fighting between parents, districts and policymakers.

Streichenberger says he dedicated considerable resources toward privacy protections from day one. There were never any demonstrated data breaches—or even indication that one might occur. However, those safeguards weren’t enough to stop the swell of public opinion against inBloom after it launched at SXSW in 2013, where one outlet depicted inBloom as Big Brother, straight out of George Orwell’s classic novel, 1984.

With no press strategy or other means by which to defend itself, inBloom was backed into a corner. Just one year after it launched, the well-funded and controversial inBloom announced that it would cease operations.

What does the government say about all of this? The Department of Education has no official position, because the SLC and inBloom were state-level efforts, says Styles. “But we do have an interest in having an accurate, informed conversation about the use of student data, and I’m not sure that’s what we saw in inBloom.”

Part of the problem stems from the larger tech community itself. According to Aimee Guidera, director of the Data Quality Campaign, concerned parents and technology enthusiasts don’t always communicate well about the role of technology in the classroom.

“When we talk about digital learning, (edtech critics) act like it will take the joy out of learning and make us digital learners,” she said.

Entrepreneurs, in turn, get caught up in excitement about their product and often forget to communicate that excitement to parents in a way they can understand, says Guidera. As a result, the platforms that succeed in winning consumer trust will be those that engage parents and districts in conversations about data collection and the value it provides to the student—possibly through focus groups, social media or town hall meetings.

“We need to focus on talking about value of data, why it matters, making positive reinforcements of how we can’t not use data,” she said. “We have this incredible new tool and our kids deserve for us to use it to get better outcomes for them.”


Energy: What Would Happen If Walls Could Talk (And They Can)

(Illustration by Jihye Kim)

On average, homeowners spend about six minutes a year thinking about their energy bills—but a new class of energy companies and their digital meters are changing that.

Digital “smart” meters now can monitor a home’s energy usage at frequent intervals, wirelessly relaying that information back to the utility. In some cases the meters also can provide feedback to residents in real time, either online or via a smartphone platform.

“(The smart grid) can tell you your carbon footprint based on when you use energy,” said KC Boyce, assistant director of the Smart Grid Consumer Collaborative. “It’s unlocking those sources of value.”

A majority of U.S. states have begun deploying the wireless meters, thanks to a push from federal stimulus funds, Boyce says. As grid modernization continues, technology can help the utilities meet power demand, help consumers lower their electricity bills and help cities reduce their carbon footprints overall.

Yet, these conveniences raise troubling questions. What kind of trends could this data reveal about a household? And who will be watching?

In theory, utilities and consumers aren’t the only ones. Smart meters generate valuable data, the kinds that many third parties might pay to have.

By design, smart meters relay information about what is happening inside a person’s home, says Lee Tien, senior staff attorney at the Electronic Frontier Foundation. As a result, the data could suggest the types of devices in a home to marketers, who might use the information to target a consumer for sales. A court or attorney could subpoena it as evidence about a person’s suspicious activities. Or, used illicitly, the data might reveal the hours at which a resident is or is not home—tipping off a burglar about when to break and enter.

“The big privacy question with energy usage has always been what it reveals about what is happening inside a person’s home,” Tien said. “When you have a data pipe, represented by your electrical usage, that leads outside, it’s not the same anymore.”

So what keeps a utility provider from selling consumers’ data? Each state has its own laws that regulate utilities’ operations—though almost all of these laws pre-date the smart grid.

But at least there’s that. When it comes to third-party energy-efficiency companies, such as Nest or Opower, utility regulations may not apply. In theory, that means the manufacturer of a smart thermostat is subject only to the privacy policy a user signs before activating a device. If a consumer does not explicitly opt out, he or she may be subject to any data sharing—or selling—that occurs between the software provider and other companies.

The privacy policy is key, says Chris Babel, CEO of TRUSTe, a data privacy management platform. Unless a company is up front in its privacy policy about what data it is collecting and how the data is being used, anything not included may be fair game.

TRUSTe formed in 1997, at the beginning of the e-commerce boom. People realized they were giving so much information to retailers, and the notion of a privacy policy didn’t exist, Babel says. Now, TRUSTe offers certifications, including a Smart Grid Privacy Certification, to help companies develop privacy policies and audit their practices.

“A strong privacy policy serves two purposes. For one, transparency sends a message to a consumer. If something goes wrong, (it shows) what you’ve committed to—it’s a regulatory safeguard,” he said. “It’s also your main message to your consumer.”

The energy-efficiency industry has regulated itself thus far, and unlike in education, there hasn’t been a cautionary tale to put the issue in the limelight for policymakers.

Government researchers published a flurry of information on the topic in 2010. The National Institute of Standards and Technology released guidelines on smart grid cybersecurity, which formed the basis for recommendations from the Department of Energy and Federal Energy Regulatory Commission.

According to NIST, “It is important to note that while Smart Grid privacy concerns may not be expressly addressed (in current laws), existing … regulations may still be applicable. Nevertheless, the innovative technologies of the Smart Grid pose potential new issues for protecting consumers’ privacy that will have to be tackled by law or other means.”

A similar report from the DOE acknowledged “the traditional responsibility of state utility commissions in regulating issues associated with data privacy,” and commended “the efforts of third-party service providers and consumer groups to foster responsible data access to achieve the goals of Smart Grid.” It endorsed improved consumer education on data privacy—but included no concrete policy recommendations.

Essentially, DOE passed the responsibility back to state-level authorities and consumer associations.

Babel thinks that’s just as well. Self-regulation by way of certifications and transparency is more effective for growing consumer trust than any further legislation for smart meters would be, he says. In addition, self-regulation can provide more nimble and quick answers as an industry develops.

Similarly, the creators of Nest have maintained strong privacy principles from the very beginning—even after the company’s acquisition by Google, Boyce says. They made clear that Nest wasn’t storing users’ data on Google servers or sending it to a big data farm.

According to Boyce, Nest grabbed a significant share of the (admittedly small) smart thermostat market by asking people to give up a portion of their privacy in exchange for something more valuable in return. That’s a valuable lesson that many Internet-era companies need to learn.

“Do people feel good about giving up this measure of privacy for what they get back?” he said. “There will be people playing with that privacy-versus-value question.”

At the moment, though, smart meters haven’t been deployed widely enough to spark much public discussion, he says. Expand the definition of the smart grid to include the Internet of Things, however, and both public and private players will need to figure out privacy and security before those connected devices are exposed to the Internet.

“Startups need to be thinking about how to figure that out,” Boyce said. “If we do start thinking about privacy and security, we can get it in place as they become more and more mainstream.”

Babel says transparency is the key for any company looking to scale in the smart-grid space.

“Be transparent in what you’re doing: Notice, choice, give access,” he said. “Let consumers know what you’re doing, and let them have the ability to say no. Let the consumer know what access you have on them. Share the data back.”


Health: Why HIPAA Isn’t Protecting All of Your Health Data

Of the top 600 health smartphone apps, only 183 have a privacy policy—and only 62 of those privacy policies directly refer or relate to the app in question. (Illustration by Jihye Kim)

We live in a world where wrist-worn devices generate data about everything from our sleeping habits to our heart rates. Moreover, technology now allows us to share that information wirelessly with anyone, any time, anywhere.

We’re approaching the age of the quantified self, thanks to the devices we wear and carry with us. While we go about our lives, they’re creating enormous amounts of mostly unregulated data about our daily habits and biometrics, most of it stored in the cloud. Meanwhile, some of the most important data about us—our medical records—is headed online as well. But do we really want medical data housed on the Internet, where it’s vulnerable to security breaches?

“EHR adoption and having data available in electronic format creates different kinds of security risks and challenges than in a paper world,” said Helen Caton-Peters, a privacy specialist at the U.S. Department of Health and Human Services (HHS).

Electronic health records have been adopted across the vast majority of large providers, but many small clinics are only now entering the Internet era. In addition, clinics are increasingly looking to adopt third-party patient portals where patients can access and manage some aspects of their own medical data, Caton-Peters says.

“(Developing new patient portals) may be a place for innovators to step in,” she said, “but that’s where security and privacy come into play: What information are you posting? Where is it available? And who can see it?”

When your medical data goes online, the companies and platforms that store it on behalf of your doctor are required to play by the rules of the Health Insurance Portability and Accountability Act. According to HHS, HIPAA regulates the flow of individually identifiable health information between insurance companies, health care providers and certain business associates. The intent behind HIPAA was somewhat different than how people tend to think of it today, says Caton-Peters.

“The intent of HIPAA was to improve electronic-claims sharing and combat waste and abuse,” she said. “Policymakers saw that there were no overarching policies regarding information traveling electronically, so it was important to develop regulations to protect the data.”

Now, though, consumers tend to misunderstand, believing that all health data is secure—and private—under HIPAA. That’s not exactly the case, says Grant Elliot, CEO and cofounder of Ostendio, a risk-assessment and compliance-management platform.

“People who are trying to develop a HIPAA-compliant privacy policy are taking a very narrow security strategy,” he said. “It will open the gate for all sorts of other scenarios where that data could be exploited.”

HIPAA sets guidelines for a very specific way that data is used, by certain defined entities. A hospital or anyone directly involved in the delivery of care is covered directly under HIPAA. Anyone who helps perform that care on behalf of a doctor or health system—such as a startup that works with a hospital to store records online—is defined as a business associate and must comply as well, as long as it is contracting with a care provider.

Use that data in a slightly different situation, however, and HIPAA no longer applies. Say a patient asks her doctor for her EHR information so she can store it in a third-party personal health record that isn’t working directly with her doctor. Information in the patient’s hands is not covered by HIPAA, Elliot says.

“Putting that into a PHR, it’s the same data (as when the hospital was using it)—but it’s no longer covered by HIPAA,” he said. “If you contract with an entity and you’ve signed their terms and conditions, because they’re not a covered entity, HIPAA doesn’t apply.”

And that’s where things really break down. It used to be that a patient’s hospital or doctor generated most health-related data. With the explosion of cloud-based services and mobile apps, many tools now provide some level of diagnostic or health information—and none of them have to play by HIPAA’s rules.

“Any organization that claims to be HIPAA compliant—well, that’s great, but it doesn’t apply to any of that data,” Elliot said.

Only about 1 in 10 health apps has a privacy policy that actually addresses the app itself, according to a Boston University study. (Illustration by Jihye Kim)

So what keeps companies accountable for protecting health data if they’re not regulated by HIPAA? Some app developers have sought Food and Drug Administration approval for their technology. Now, an elite cadre of apps has received FDA approval and can be used only on a “prescription” basis.

Others, including Apple, have taken the self-regulation route, adjusting the rules for third-party developers who use the Apple App store. According to the recently updated iOS Developer Library, Apple’s new HealthKit places data controls in the hands of users, who must grant permission to each app before it can track and collect data.

“Users can grant or deny permission separately for each type of data,” Apple states. “From the app’s point of view, if the app has been denied permission to read data, no data of that type exists.”
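That per-type permission model is visible in the HealthKit programming interface itself. Below is a minimal Swift sketch of how a third-party app might request read-only access to step counts and heart rate; the specific data types are illustrative assumptions, not requirements of HealthKit or of any particular app.

```swift
import HealthKit

// A minimal sketch of HealthKit's per-type permission model.
// The data types below (step count, heart rate) are illustrative choices.
func requestHealthAccess(using healthStore: HKHealthStore) {
    // Some devices don't support HealthKit at all.
    guard HKHealthStore.isHealthDataAvailable() else { return }

    // Each type must be requested -- and granted by the user -- separately.
    let readTypes: Set<HKObjectType> = [
        HKObjectType.quantityType(forIdentifier: .stepCount)!,
        HKObjectType.quantityType(forIdentifier: .heartRate)!
    ]

    // A read-only request: no types are passed for writing.
    healthStore.requestAuthorization(toShare: nil, read: readTypes) { _, error in
        // If the user denies read access for a type, later queries simply return
        // no samples of that type; from the app's point of view, the data does not exist.
        if let error = error {
            print("HealthKit authorization failed: \(error.localizedDescription)")
        }
    }
}
```

Notably, the interface never tells the app whether a read permission was granted or denied, only that the request was presented; an app cannot infer a user’s choices from the authorization step.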

Furthermore, the App Store will reject any HealthKit app that lacks a privacy policy, stores user data in iCloud or uses data for any purpose except health management or medical research.

“You cannot sell information gained through HealthKit to advertising platforms, data brokers or information resellers,” Apple states. “Even with permission, you can only share information to a third party if they are also providing a health or fitness service to the user.”

In other words, Apple is saying no to startups and apps that collect personal data and use it for secondary purposes. All of this is good news for consumers—but HealthKit-compatible apps are only a fraction of all health apps on the market.

Moreover, according to researchers at Boston University, less than one-third of all popular health apps even have privacy policies. The BU study examined 600 of the most popular apps and found that 183 had a privacy policy, most often outside of the app itself and on the developer’s website—and only 62 of those policies even addressed the app in question.

That means about 1 in 10 mobile health applications has a policy that discloses what information the app collects—and there are more than 24,400 health apps out there, according to the study.

So that “health” information generated by a wrist wearable? It’s not quite private, depending on the software developer’s policies—which may or may not even exist. And while developing a readable, navigable data-use policy is a must-do for every company, the issue also should spark a discussion about what health data is, says Alex Reynolds, senior regulatory counsel at the Consumer Electronics Association.

“We have to … have a conversation about the distinction between the data being collected for medical purposes versus wellness (and) fitness data that is not being evaluated by a doctor or to make a decision about treatment,” he said.

CEA doesn’t advocate that the data typically collected by wearables—a user’s steps, heartbeat or sleeping patterns—should be regulated in the same way as health information. Because wearable data isn’t data for treatment, it doesn’t fall under HIPAA.

The only exception, says Elliot, would be if a doctor prescribed a wearable and accompanying software as a treatment device. In that case, the software developer would be a third-party covered entity—and therefore required to comply with HIPAA.

“If … the relationship is between you and the third party, HIPAA doesn’t apply,” he said. “The only thing that governs how Fitbit (for example) manages data is the privacy policy that you sign when you download the app and buy the product.”


Cities: Are We Better Off in a World of Big Data?

(Illustration by Jihye Kim)


The previous three sections have explored the ways in which privacy affects individual industries, and taken together they raise a larger question: Are we better off in a world of big data?

It depends on how you define it.

According to John Tolva, the former chief technology officer for the city of Chicago, cities have been collecting data—on neighborhoods, streets and residents—for centuries. The city of Chicago, for example, has parcel-by-parcel “big data” dating back to the 18th and 19th centuries; after the fire that destroyed much of the city in 1871, the city created insurance maps to ensure that such an event wouldn’t happen again.

Fast forward 130 years. Tolva says Chicago officials now use data from many new sources—ranging from posts on Twitter to machine sensors—to monitor and improve life in the city.

“If you want to intercede,” he says, “you need to listen to the heartbeat of the city—and that’s people talking about it.”

As CTO, Tolva could look at data on land-use patterns to see where businesses were thriving and correlate it to transit accessibility, or look at data on childhood obesity and determine whether or not fresh food was available. His team also could monitor Twitter as an early-warning system for food-borne illness at restaurants.

“Twitter is public,” Tolva points out. “It’s not surveillance. You can say that’s collective observation.”

Of course, city dwellers can choose not to broadcast their lives on Twitter, limiting the possibility of “collective observation.” But those same people may find it harder to opt out of other services, such as a registered transit card to pay for public transportation or sensors on streets and retail stores that register location based on mobile-device signals.

All of those data come together to form a person’s digital identity, and consumers and companies alike are still figuring out how to navigate that, says Alex Reynolds, senior regulatory counsel at the Consumer Electronics Association. As consumers’ digital lives evolve, so, too, will their expectations about shareable data versus private data. And with so many types of data at companies’ fingertips, companies are looking to determine what kinds of data are important.

They’re also trying to figure out what people consider “creepy.” For example, retail customers may feel violated to realize a company knows their location—even though there is nothing illegal about picking up a Bluetooth signal from a person’s smartphone. Similarly, Uber tracks riders’ geolocation in real time by default; it uses the information “to address user or driver support, technical, or business issues that may arise,” improving the experience. But Uber’s data scientists also can analyze that data to predict where you’re going—and they’re right 74 percent of the time.

As a result, the benefits of how companies use data get lost in discussions about how it can be “creepy.” That poses a potential problem for countless businesses in the Internet economy that are built on doing interesting things with people’s data, Tolva says.

That means entrepreneurs seeking social good by way of public-private partnerships and city data should be wary. If a company isn’t engaged in a dialogue with consumers about the data it is collecting, backlash can be brutal, says Reynolds.

“Twitter is public. It’s not surveillance. You can say that’s collective observation.” John Tolva, former chief technology officer of Chicago (Illustration by Jihye Kim)


According to Reynolds, consumers are savvier than the industry gives them credit for, and entrepreneurs should remember that these people are using their products on a daily basis. That ought to be the best incentive for transparency: to develop consumer trust in a product.

“We should approach this by saying companies and government should have maximum flexibility to design and market products to facilitate rapid innovation,” he said. “We … also should provide consumers with strong, reasonable, contextually appropriate options to control privacy and security.”

But what happens when one’s desire for privacy—and control over it—intersects with public safety? City police departments often manage extremely robust data sets—and for good reason. Citizens’ well-being depends on it.

Matthew Bromeland, executive director for initiatives and research and special assistant to the metropolitan police chief in Washington, D.C., says it’s a tricky challenge to manage and utilize all of the data at law enforcement’s fingertips.

“We’re talking about millions of records and balancing it—not just from the individual privacy standpoint, but also requirements that we be transparent,” he said.

And that means police departments must constantly evaluate new technologies not just in terms of usefulness, but also in terms of privacy. To help do that, the D.C. MPD hired Liz Lyons, the first-ever chief privacy officer in a major metropolitan police department, earlier this summer.

Lyons is a privacy expert, a lawyer turned public official who can rattle off the various lengths of time that MPD can store different types of data. For closed-circuit television footage, that’s a mere 10 days; for cold cases, it’s 75 years, she says.

That seems like a broad spectrum, Lyons says. Yet, citizens can rest knowing that laws govern exactly how MPD can use any data: only to solve crimes or make a city safer. Moreover, the vast majority of law enforcement data is based on incidents—where and when an offense occurred—not individuals.

It isn’t exact, but it doesn’t need to be. Incident-based trends allow MPD to predict areas likely to have crime spikes and put police in those areas. Another example of incident-based data is ShotSpotter, a series of gunshot-triggered sensors that can triangulate a shot’s location.

“It’s a great privacy tool because you have to have done the thing before anyone cares,” Lyons said. “The gunshot triggers it, and the police only want to find you if you did.”

In addition, Lyons says the police department uses data the same way that companies do: to improve people’s daily lives. The difference really is what different people are willing to sacrifice, she says. Many people would consent to letting police use anonymized household data at the aggregate level if it means their neighborhoods become safer. But are they willing to be tracked via a transit card in exchange for a bus that runs more often to busier stops?

“Weigh your base,” she advises. “Some people would give up their privacy if that means the bus runs better—but I wouldn’t.”


Takeaways: Looking Ahead

With so many new ways to use data, it can be easy for companies to get caught up in possibilities for innovation. And while certain innovations have the potential to make our lives easier or more convenient, the privacy issues associated with them aren’t going away.

In fact, they’re only getting more complicated, says Jules Polonetsky, director of the Future of Privacy Forum.

And since privacy issues aren’t getting any simpler, there needs to be one unified dialogue between the entrepreneurial community and consumers. Currently, those conversations tend to be siloed, says CEA’s Reynolds. Entrepreneurs may not hear users’ concerns until it’s too late.

“There needs to be a feedback loop, a constant dialogue, not two parallel conversations,” Reynolds said. “Ultimately, there has to be one conversation … taking place where consumers can put forth their concerns specifically.”

In order to have that conversation, though, entrepreneurs in every industry need to invest in educating themselves about privacy and security, CEA’s Reynolds says.

“It’s not a matter of knowing the arcane nuances of compliance,” he said. “It is about attaining a basic level of knowledge about how privacy and security options … affect the way consumers interact with products and their trust.”

It’s a tricky equation—and entrepreneurs need to be mindful about how their product, app or service fits into it. Moreover, there is no silver-bullet solution. Reynolds says it’s a matter of asking common-sense questions: What kind of data am I collecting? Where is it going? Where is it being stored?

More and more, customers—whether organizational, such as schools and clinics, or individuals—use data security and privacy as a differentiating factor between products. All other features being equal, the product with a well-articulated privacy policy wins.

Streichenberger says that alone is so important that startups should hire privacy policy experts, who can help other team members understand—and articulate in plain English—what data they can collect, how they should store it and for how long.

Let inBloom serve as a cautionary tale, he says.

“You can have an amazing company that gets in trouble overnight because it isn’t prepared with those things.”




Melissa Steffan

Melissa is the former assistant editor for 1776, where she worked on the media team to create compelling, idea-driven content and reporting. A Seattle native, she graduated from Seattle Pacific…