Image: Rebecca Lipman

On a near-daily basis, we find ourselves sharing our personal information on the internet. From making online retail purchases to updating our social media accounts and everything in between, we’re regularly asked by companies and organizations to provide our name, email address, date of birth, and mailing address, among other data fields.

When completing these personal data transactions, we’re all familiar with being presented with multi-page privacy statements, which we may skim but likely won’t have time to read in full before checking an “I accept the terms and conditions” box. With limited time to scrutinize all the policies we’re agreeing to on the web, most of us would like to assume that our information is being protected. There must be common regulation in place to keep data in check, right? Actually, according to Rebecca Lipman, Law Clerk in the District of New Jersey, there isn’t. In reality, many companies take our data and not only use it for their own research but also make it available to other companies, including some that are simply in the business of buying and selling data, called data brokers.

In a recent article, “Online Privacy and the Invisible Market for Our Data” (forthcoming in the Penn State Law Review), Lipman explores the legal boundaries, or lack thereof, of how companies can use and share personal information online and what it means for consumers. We caught up with her to learn more.

Q&A with Rebecca Lipman

Can you give a brief overview of your article?

RL: Online privacy is something a lot of people are concerned about today, but often in an ill-defined way. People aren’t sure if they’re nervous about a data breach, which is a security issue, or if they’re concerned about the amount of data companies are collecting about them, which is more of a privacy issue. There’s a sense that people are resigning themselves to the situation to some degree because, although they want to know what’s being done with their data, they don’t know how to find out or what to do about it. I wanted to give readers an overview of what is actually being done with their data within the commercial sector, and then propose a regime that would give consumers meaningful notice about what’s being done with their data instead of the lengthy privacy policies we have in place right now, which, frankly, no one reads.

In your article you mention that when people share their data with one company, it is often then shared with other companies, including data brokers. Can you explain this?

RL: A lot of the problems we have in online privacy result from the fact that people are used to sharing their information one-on-one. If I tell you something, the worst that’s going to happen is you might share that information with one other person, which I may not be happy about. But online, the web of corporations that will receive your data is far more attenuated than that. When you go on a single website, there are often multiple trackers monitoring your activity, because your information is being sold by the primary website you’re on, or being shared for other reasons.

On the first page of my article I talk about how, if you’re on the Centers for Disease Control and Prevention’s (CDC) website searching for some embarrassing symptoms, Google Analytics receives that information, because the CDC uses Google Analytics to measure its website traffic. So the CDC’s not trying to turn a profit on you, but nevertheless your data is being shared without your knowing about it. When you’re on a commercial website like the New York Times or Gawker, there are dozens of other companies that want to know what you’re looking at, so the website you’re visiting may give them that information for a price. That’s just a part of the online economy at this point.

Data brokers collect information about virtually all consumers in the country. The largest one, Acxiom, has something like 3,000 data points about every consumer. One of the main uses for this data is advertising. If a company’s selling cat food, they don’t have any particular interest in you unless you own a cat. So they want to be able to go to a data broker and say “our company’s strong in the Midwest; I want to talk to middle-aged female cat owners in Ohio,” and the data broker can provide them with a list.

What concerns arise from companies sharing information with data brokers?

RL: A big problem, besides it being a little creepy, is that the data brokers’ information is often wrong. At any given point, approximately 30% of the information in a person’s profile is going to be inaccurate. If the only issue is that some people get imperfectly targeted advertisements, that’s not a tragic consequence.

However, there is the potential for much more serious consequences. Spokeo previously advertised its database of profiles as an employment-screening tool. If Spokeo had information about an applicant’s education and work history that was incorrect and therefore conflicted with the applicant’s resume, an employer may have rejected the applicant for apparently lying about his/her background. The Federal Trade Commission (FTC) alleged that Spokeo failed to ensure its information was accurate and ran afoul of other requirements under the Fair Credit Reporting Act. Spokeo consequently paid to settle the charges. The FTC has also expressed concern that data brokers’ practice of selling customized lists of consumers like “Urban Scramble,” which consists of low-income minorities, or “Rural Everlasting,” which consists of poorly-educated senior citizens, could facilitate illegal discrimination by retailers.

So data has the potential for real harm, depending on how it’s used, and data brokers operate with a fundamental lack of transparency or liability. Spokeo was pursued by the FTC for what it was doing on the employment front because there is a specific statute governing what you can do when you operate as a consumer reporting agency. But the protective statutes we have in place are spotty. We do not have overarching privacy regulations; our laws only protect specific sectors, like health or credit information.

What sort of narrow regulation around data collection and use exists at present?

RL: There are things like the Fair Credit Reporting Act, where, if you’re distributing credit information about a consumer, there are a lot of regulations in place for ensuring accuracy and giving rights to consumers. But data brokers are usually careful not to operate in any of the few regulatory buckets that do exist; they stick to activities that fall outside them. For example, those interested in healthcare data won’t traffic in HIPAA-protected information, but they can legally collect what medications you’re buying online and any user-generated health data they can get (like information from your Fitbit), because HIPAA only protects hospital and doctor records. So, largely, the data brokers are operating outside of any kind of regulatory framework.

How do you think these data-sharing issues could be addressed legally and in terms of communication with the public?

RL: There’s a lot of debate as to whether giving people better notice about how their data will be used by companies and organizations can work at all. And that’s a fair question: people’s attention spans are finite, and they give very inconsistent answers about how much they care about privacy versus what they are actually willing to do to protect it. So it can be very hard to gauge what’s possible with notice, and how much this is even a real problem that needs to be addressed. But I think that if you do have effective notice, you get a lot closer to figuring out whether people change their web-surfing habits, which can indicate how much they truly care about being tracked online. Right now it is very hard to gauge what people would do if they had useful information about online tracking and data sharing, so I think more effective notice is a necessary first step.

An example I give at the end of the paper is Instagram. It changed its terms of service in a way that suggested it was now claiming the right to sell users’ photos. There was a real uprising: people left, there was a lot of publicity, and Instagram absolutely changed its policies based on that. So I think attention to the issue does spur change.

My article calls for additional legislation that would enable the FTC to mandate uniform, nutrition-label-style privacy policies. Right now, as long as each company can write whatever endlessly long policy it wants, it’s very hard to get to a place where notice is communicated concisely and effectively. I don’t believe the FTC has the authority to mandate this on its own right now. So legally we need that sort of enabling legislation, and more broadly I think it would help for Congress to take up the issue and bring more attention to it.

In your research, have you noticed people’s behavior shifting as they discover that they have less privacy than they thought?

RL: There’s a lot of speculation about that. And there are surely a lot of people who’ve changed their online habits, particularly after the Edward Snowden revelations; in my article I mention that around 70% of people report some change in their behavior. But that could be anything; maybe they’re just deleting their cookies a couple of times. In most cases, even though people know more of their data is being shared online, they don’t change. They’re resigned to it. The fear can be very amorphous. Again, whether it’s about privacy or security or a combination of the two, people don’t know if they’re afraid of the government or of private companies, and they just don’t know what to do to protect themselves. They’re not experts, and they don’t have hours to think about this every day. So they generally don’t do much. I think it would take more visible changes in the privacy landscape for people to really change their behavior.