Zittrain and Zuckerberg discuss encryption, ‘information fiduciaries’ and targeted advertisements


JONATHAN ZITTRAIN:
So, thank you, Mark, for coming to talk to
me and to our students from the Techtopia program and
from my Internet and Society course at Harvard Law School. We’re really pleased to
have a chance to talk about any number of issues. And we should just
dive right in. So, privacy, autonomy, and
information fiduciaries. MARK ZUCKERBERG: All right. JONATHAN ZITTRAIN: I’d
love to talk about that. MARK ZUCKERBERG: Yeah. I read your piece in
The New York Times. JONATHAN ZITTRAIN: The
one with the headline that said “Mark Zuckerberg
Can Fix This Mess”? MARK ZUCKERBERG: Yeah. [LAUGHTER] Although, that was last year. JONATHAN ZITTRAIN: That’s true. Are you suggesting
it’s all fixed? MARK ZUCKERBERG: No. No. JONATHAN ZITTRAIN: OK, good. MARK ZUCKERBERG: I’m
suggesting that I’m curious whether you still think
that we can fix this mess. JONATHAN ZITTRAIN: Hope– [LAUGHTER] Hope springs eternal– MARK ZUCKERBERG: There you go. JONATHAN ZITTRAIN:
–is my motto. So all right. Let me give a quick
characterization of this idea that– the coinage and the
scaffolding for it is from my colleague,
Jack Balkin, at Yale. The two of us have been
developing it out further. There are a standard
number of privacy questions with which you might have
some familiarity having to do with people
conveying information that they know they’re
conveying or they’re not so sure they are. But mouse droppings, as
we used to call them, when they run in the rafters of
the internet and leave traces. And then, the standard way
of talking about that is you want to make sure that
that stuff doesn’t go where you don’t want it to go. And I call that
informational privacy. We don’t want people to know
stuff that we want maybe our friends only to know. And on a place like
Facebook, you’re supposed to be able
to tweak your settings and say, give them to
this and not to that. But there’s also
ways in which stuff that we share with
consent could still sort of be used against us. And it feels like,
well, “you consented” may not end the discussion. And the analogy that my
colleague Jack brought to bear was one of a doctor
and a patient or a lawyer and a client
or sometimes in America, but not always, a financial
advisor and a client that says that those
professionals have certain expertise. They get trusted with all
sorts of sensitive information from their clients and patients. And so they have an
extra duty to act in the interests
of those clients, even if their own
interests conflict. And so maybe just one quick
hypo to get us started– I wrote a piece in
2014 that maybe you read that was a hypothetical
about elections in which it said, just hypothetically,
imagine that Facebook had a view about which
candidate should win, and they reminded
people likely to vote for the favored candidate
that it was election day. And to others, they
simply sent a cat photo. Would that be wrong? And I find– I have no idea if it’s illegal. It does seem wrong to me. And it might be that
the fiduciary approach captures what makes it wrong. MARK ZUCKERBERG: All right. So I think we could probably
spend the whole next hour just talking about that. So I read your op-ed, and I
also read Balkin’s blog post on information fiduciaries. And I’ve had a
conversation with him, too. JONATHAN ZITTRAIN: Great. MARK ZUCKERBERG: And the– at first blush kind of
reading through this, my reaction is there’s
a lot here that makes sense. The idea of us having a
fiduciary relationship with the people who
use our services is kind of intuitively– it’s
how we think about building what we’re building, right? So reading through this,
it’s like, all right. A lot of people seem to have
this mistaken notion that when we’re putting
together a news feed and doing ranking that
we have a team of people who are focused on maximizing
the time that people spend. But that’s actually not
the goal that we give them. We tell people on the
team, produce the service that we think is going to be
the highest quality. We try to ground it in kind of getting
people to come in and tell us the content that we
can potentially show, what is going to be– they tell us what
they want to see. And then, we build models
that kind of can predict that and build that service. JONATHAN ZITTRAIN: And by the
way, was that always the case, or was that a pace you
got to through some course adjustments? MARK ZUCKERBERG: Through
course adjustments. I mean, you start off using
simpler signals, like what people are clicking on in feed. But then, you pretty
quickly learn, hey, that gets you to a
local optimum, right, where if you’re focusing
on what people click on and predicting what
people click on, then you select for clickbait, right? So pretty quickly, you
realize from real feedback, from real people that’s not
actually what people want. You’re not going to build the
best service by doing that. So you bring in people and
actually have these panels of– we call it getting
to ground truth. You show people
all the candidates for what can be shown to them. And you have people
say, what’s the most meaningful thing that I
wish that this system were showing us?
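To make that concrete, here is a minimal sketch of the blended ranking Zuckerberg describes: predicting clicks alone selects for clickbait, so a hypothetical ranker mixes in a model trained on survey-based "ground truth" labels. The names, weights, and scoring function are all illustrative, not Facebook's actual system.

```python
# Hypothetical sketch of the blended ranking described above. Predicting
# clicks alone selects for clickbait, so a survey-trained "meaningfulness"
# prediction is mixed in. Names and weights are illustrative, not Facebook's.
from dataclasses import dataclass

@dataclass
class Candidate:
    story_id: str
    p_click: float       # predicted probability the person clicks
    p_meaningful: float  # predicted survey answer: "was this worth your time?"

def rank_feed(candidates: list[Candidate],
              w_click: float = 0.3, w_meaningful: float = 0.7) -> list[Candidate]:
    """Order stories by a blended score rather than by clicks alone."""
    return sorted(candidates,
                  key=lambda c: w_click * c.p_click + w_meaningful * c.p_meaningful,
                  reverse=True)
```

So all this is kind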
of a way of saying that our own self-image of
ourselves and what we’re doing is that we’re acting
as fiduciaries and trying to build the
best services for people. Where I think that this
ends up getting interesting is then the question of who gets
to decide in the legal sense or the policy sense of what’s
in people’s best interest. So we come in every day
and think, hey, we’re building a service where we’re
ranking news feeds trying to show people the
most relevant content with an assumption
that’s backed by data that, in general, people
want us to show them the most relevant content. But at some level, you
could ask the question, which is who gets to decide
that ranking news feed or showing relevant ads
or any of the other things that we choose to work on are
actually in people’s interest? And we’re doing the
best that we can to try to build a service
that we think is the best. At the end of the
day, a lot of this is grounded in people choose to
use it because clearly, they’re getting some value from it. But then, there are all these
questions, like you say, about you have– about where people
can effectively give consent and not. JONATHAN ZITTRAIN: Yes. MARK ZUCKERBERG: So
I think that there’s a lot of interesting
questions to unpack about how you’d actually
implement a model like that. But at a high level, I think– one of the things that I
think about in terms of– and we’re running
this big company. It’s important in
society that people trust the institutions of society. Clearly, I think
we’re in a position now where people rightly
have a lot of questions about big internet companies,
Facebook, in particular. And I do think getting
to a point where there’s the right regulation
and rules in place just provides a kind of
societal guardrail framework where people can
have confidence that, OK, these companies are
operating within a framework that we’ve all agreed to. That’s better than them just
doing whatever they want. And I think that that would
give people confidence. So figuring out what that
framework is, I think, is a really important thing. And I’m sure we’ll
talk about that as it relates to a lot of
the content areas today. But getting to that
question of how do you– who determines what’s
in people’s best interest, if not
people themselves, is a really
interesting question. JONATHAN ZITTRAIN: Yes. So we should surely
talk about that. So on our agenda is the
who decides question. Other agenda items include– just as you say, the fiduciary
framework sounds nice to you. Doctors. Patients. Facebook users. And I hear you saying that’s
pretty much where you’re wanting to end up, anyway. There are some
interesting questions about what people want versus
what they want to want. People will say on
January 1st, what I want– New Year’s resolution–
is a gym membership. And then, on January 2, they
don’t want to go to the gym. They want to want
to go to the gym, but they never quite make it. And then, of course,
a business model of pay for the whole
year ahead of time, and they know you’ll never
turn up develops around that. And I guess a
specific area to delve into for a moment on that
might be on the advertising side of things. Maybe the dichotomy
between personalization, and does it ever go
into exploitation? Now, there might be stuff– I know Facebook,
for example, bans payday loans as best it can. That’s just a substantive area
that it’s like, all right. We don’t want to do that. But when we think about
good personalization so that Facebook knows I
have a dog and not a cat, and then Target can then offer
me dog food and not cat food, how about, if not now, a future
day in which an advertising platform can offer to an
ad targeter some sense of I just lost my pet? I’m really upset. I’m ready to make
some snap decisions that I might regret later. But when I make them,
I’m going to make them, so this is the perfect time
to tee up a cubic zirconia, or whatever the thing is. That seems to me a fiduciary
approach would say, ideally– how we get there, I don’t know. But ideally, we wouldn’t
permit that kind of approach to somebody using
the information we’ve gleaned from them to know
they’re in a tough spot and then to exploit them. But I don’t know. I don’t know how you would
think about something like that. Could you write an algorithm
to detect something like that? MARK ZUCKERBERG: Well, I think
one of the key principles is that we’re trying to run
this company for the long term. And I think that people think
that a lot of things that– if you were just
trying to optimize the profits for next quarter
or something like that, you might want to do
things that people might like in the near term,
but over the long term will come to resent. But if you actually care
about building a community and achieving this
mission and building the company for the
long term, I think you’re much more aligned
than people often think companies are. And it gets back to the
idea before where I think our self-image is
largely acting in this kind of fiduciary relationship,
as you’re saying, which is– we could go through a lot
of different examples. I mean, we don’t want
to show people content that they’re going to click
on and engage with, but then feel like they wasted
their time afterwards. We don’t want to
show them things that they’re going to make
a decision based off of that and then regret later. There’s a hard balance
here, which is– I mean, if you’re
talking about what people want to want versus
what they want, often, people’s
revealed preferences of what they actually
do shows a deeper sense of what they
want than what they think they want to want. So I think that there’s
a question between when
something is exploitative versus when something
is real, but isn’t what you would say that you want. And that’s a really
hard thing to get at. But, on a lot of these
cases, my experience of running the
company is that you start off building a system. You have relatively
unsophisticated signals to start. And you build up increasingly
complex models over time that try to take into account
more of what people care about. And, I mean, there
are all these examples that we can go through. I think probably
news feed and ads are probably the two most
complex ranking examples that we have. But it’s– like we were
talking about a second ago, when we start off
with the systems– just start with news feed, but
you could do this on ads, too. The most naive signals
are what people click on, what people like. But then, you just very quickly
realize that that doesn’t– it approximates
something, but it’s a very crude approximation
of the ground truth of what people actually care about. So what you really
want to get to is, as much as possible,
getting real people to look at the real
candidates for content and tell you in a
multi-dimensional way what matters to them and try to
build systems that model that. And then, you want to be kind
of conservative on preventing downside. So your example of
the payday loans– and when we’ve talked
about this in the past, you’ve put the
question to me of, how do you know when
a payday loan is going to be exploitative if
you’re targeting someone who is in a bad situation? And our answer is,
well, we don’t really know when it’s going
to be exploitative, but we think that the
whole category potentially has a massive risk of
that, so we just ban it. JONATHAN ZITTRAIN: Which makes
it an easy case, but yes. MARK ZUCKERBERG: Yes. Yes. And I think that
the harder cases are when there’s significant
upside and significant downside and you want to
weigh both of them. So, for example, once we
started putting together a really big effort
on preventing election interference, one of the
initial ideas that came up was, why don’t we just
ban all ads that relate to anything that is political? And then, OK. You pretty quickly get
into, what’s a political ad? The classic legal
definition is things that are around
elections and candidates, but that’s not actually
what Russia and other folks were primarily doing, right? A lot of the issues
that we’ve seen are around issue ads and
basically sowing division on what are social issues. So all right. I don’t think you’re going
to get in the way of people’s speech and ability to
promote and do advocacy on issues that they care about. But some of the
questions– all right. So then, what’s
the right balance of how do you make sure
that you’re providing the right level of controls
that people who aren’t supposed to be participating
in these debates aren’t, or that at
least you’re providing the right transparency? But I think we veered a little
bit from the original question. But the– but yeah. OK. So let’s get back to
where you were at. JONATHAN ZITTRAIN:
Well, here’s– I mean, this is a way
of maybe moving forward, which is a platform as
complete as Facebook is these days offers
lots of opportunities to shape what people see
and possibly to help them with those nudges that
it’s time to go to the gym or to keep them from
falling into the depredations of a payday loan. And it is a question of
so long as the platform is in a position to
do it, does it now have an ethical obligation
to do it, to help people achieve the good life? And I worry that it is too
great a burden for any company to bear to have to figure
out, say, if not the perfect, the most reasonable news feed
for every one of the– how many– 2 and 1/2 billion active
users– something like that– MARK ZUCKERBERG: On that order. JONATHAN ZITTRAIN:
–all the time. And there might
be some ways that start a little bit to get into
engineering of the thing that would say, OK,
with all hindsight, are there ways to architect this
so that the stakes aren’t as high, aren’t as
focused, on just, gosh, is Facebook doing this right? It’s as if there were only one
newspaper in the whole world, or one or two. And it’s like, well, then,
what The New York Times chooses to put on its home page, if
it were the only newspaper, would have outsized importance. So, just as a technical
matter, a number of the students in
this room had a chance to hear from Tim Berners-Lee,
inventor of the world wide web. And he has a new idea for
something called Solid. I don’t know if
you’ve heard of Solid. It’s a protocol more
than it is a product, so there’s no car to
move off the lot today. But its idea is allowing people
to have the data that they generate as they motor
around the web end up in their own data locker. Now, for somebody like Tim,
it might mean literally in a locker under his desk. And he could wake
up in the night and see where his data is. For others, it might be in a rack
somewhere guarded, perhaps, by a fiduciary who’s
looking out for them the way that we put money in a bank. And then, we can
sleep at night knowing the bankers are– that’s maybe
not the best analogy in 2019. But blockchain. [LAUGHTER] MARK ZUCKERBERG:
We’ll get there. JONATHAN ZITTRAIN:
We’ll get there. But Solid says, if
you did that, people would then, or their helpful
proxies, be able to say, all right. Facebook is coming along. It wants the following data
from me, and including data that it has generated
about me as I use it, but stored back in my locker. And it kind of has to
come back to my well to draw water each time. And that way, if I want
to switch to Schmacebook or something, it’s
still in my well, and I can just immediately
grant permission to Schmacebook to see it, and I don’t have
to do a kind of data slurp and then reupload it. It’s a fully distributed
way of thinking about data.
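As a toy illustration of the Solid-style flow described above, where an app must come back to the user's well on every read and switching services needs no re-upload, here is a sketch. The Pod class and its methods are invented for illustration and are not Solid's actual API.

```python
# Toy sketch of a Solid-style "data locker" (hypothetical Pod class, not
# Solid's real API): every read comes back to the user's well, and access
# can be granted to a new service or revoked without moving the data.
class Pod:
    def __init__(self):
        self._data: dict[str, str] = {}
        self._grants: dict[str, set[str]] = {}  # app name -> readable keys

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def grant(self, app: str, key: str) -> None:
        self._grants.setdefault(app, set()).add(key)

    def revoke(self, app: str) -> None:
        self._grants.pop(app, None)  # access ends at the next read

    def fetch(self, app: str, key: str) -> str:
        # Permission is checked on every request, not copied away once.
        if key not in self._grants.get(app, set()):
            raise PermissionError(f"{app} may not read {key}")
        return self._data[key]

pod = Pod()
pod.put("interests", "dogs, not cats")
pod.grant("facebook", "interests")
pod.grant("schmacebook", "interests")  # switching needs no re-upload
pod.revoke("facebook")
```

And I’m curious. From an engineering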
perspective, does this seem doable with
something of the size and the number of spinning
wheels that Facebook has? And does it seem like a– I’m curious your reaction
to an idea like that. MARK ZUCKERBERG: So I think
it’s quite interesting. Certainly, the level of
computation that Facebook is doing and all of the
services that we’re building is really intense to do
in a distributed way. I mean, I think– as a basic model, I think we’re
building out the data center capacity over the
next five years, and our plan for what we think
we need is on the order of
what AWS and Google Cloud are doing for supporting
all of their customers. OK. So this is a relatively
computationally intense thing. Over time, you assume you
will get more compute, so decentralized
things, which are less efficient computationally,
will be harder. Sorry. They’re harder to
do computation on. But eventually, maybe you
have the compute resources to do that. I think the more
interesting questions there are not feasibility
in the near term, but the philosophical questions
of the goodness of a system like that. So one question,
if you want to– so we can get into
decentralization. One of the things that I’ve
been thinking about a lot is a use of blockchain that I
am potentially interested in, although I haven’t figured out
a way to make this work out, is around authentication and
basically granting access to your information
to different services, so basically replacing
the notion of what we have with Facebook
Connect with something that’s fully distributed. JONATHAN ZITTRAIN: Do you want
to log in with your Facebook account is the status quo. MARK ZUCKERBERG: Basically,
you take your information. You store it on some
decentralized system. And you have the choice
of whether to log in to different
places, and you’re not going through an
intermediary, which is kind of like what you’re
suggesting here, in a sense. OK. Now, there’s a lot of
things that I think would be quite attractive about that. For developers,
one of the things that is really
troubling about working with our system or Google
Assistant, for that matter, or having to deliver your
services through Apple’s app store is you don’t want to have
an intermediary between serving the people who are
using your service and you where someone
can just say, hey, we as a developer have
to follow your policy. And if we don’t, then you can
cut off access to the people we’re serving. That’s kind of a difficult and
troubling position to be in. So I think developers– JONATHAN ZITTRAIN:
I think you’re referring to a recent incident. MARK ZUCKERBERG: No. Well, I was– well, sure. But I think it underscores the– I think every developer
probably feels this. People are using any app store,
but also log in with Facebook, with Google, any
of these services. You want a direct relationship
with the people you serve.
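Concretely, the intermediary-free login Zuckerberg is circling around might look something like the following sketch: the user holds a signing keypair, and a service verifies a signed challenge directly instead of asking a platform to vouch. This uses the real PyNaCl library, but the flow is an illustration, not an actual Facebook Connect replacement.

```python
# Sketch of login without an intermediary (uses the real PyNaCl library;
# the flow itself is an illustration): the user keeps a signing keypair,
# and a service verifies a signed challenge instead of asking a platform.
import os
from nacl.signing import SigningKey

signing_key = SigningKey.generate()       # lives with the user, not a platform
public_identity = signing_key.verify_key  # shared with services as the identity

challenge = os.urandom(32)                # service issues a random challenge
signed = signing_key.sign(challenge)      # user proves control of the key
public_identity.verify(signed)            # raises BadSignatureError if forged
print("login accepted without any platform vouching for the user")
```

Now, OK. But let’s look at the flip side. So what we saw in the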
last couple of years with Cambridge
Analytica was basically an example where people
chose to take data that– some of it was their data. Some of it was data that they
had seen from their friends, right? Because if you want to
do things like making it so alternative services can
build a competing news feed, then you need to
be able to make it so that people can
bring the data that they see within the system. OK. So basically, people
chose to give their data to a developer
that was affiliated with Cambridge
University, which is a really respected institution. And then, that developer
turned around and sold the data to the firm Cambridge
Analytica, which is in violation of our policies. So we cut off the
developer’s access. And, of course, in a
fully distributed system, there would be no one who could
cut off the developer’s access. So the question is, if you have
a fully distributed system, it dramatically empowers
individuals on the one hand, but it really raises the stakes. And it gets to your
questions around, well, what are the
boundaries on consent and how people can
really actually effectively know that
they’re giving consent to an institution? In some ways, it’s a lot
easier to regulate and hold accountable large companies
like Facebook or Google because they’re more
visible, they’re more transparent than
the long tail of services that people would choose to
then go interact with directly. So I think that this is a really
interesting social question. To some degree,
I think this idea of going in the direction
of blockchain authentication is less gated on the technology
and capacity to do that. I think if you were doing
fully decentralized Facebook, that would take
massive computation. But I’m sure we could
do fully decentralized authentication if we wanted to. I think the real question
is, do you really want that? JONATHAN ZITTRAIN: Yes. MARK ZUCKERBERG: And I think
you’d have more cases where, yes, people would be able
to not have an intermediary, but you’d also have
more cases of abuse, and the recourse
would be much harder. JONATHAN ZITTRAIN: Yes. I mean, what I hear
you saying is people, as they go about
their business online, are generating data
about themselves that’s quite valuable, if
not to themselves, to others who might
interact with them. And the more they are empowered,
possibly through a distributed system, to decide
where that data goes and with whom they
want to share it, the more they could be
exposed to exploitation. This is a genuine
dilemma because I’m a huge fan of decentralization. MARK ZUCKERBERG: Yeah. JONATHAN ZITTRAIN: But
I also see the problem. And maybe one answer is there’s
some data that’s just so toxic, there’s no vessel
we should put it in. It might eat a hole
through it or something, metaphorically speaking. But then again, innocuous
data can so quickly be assembled into something
scary, so I don’t know– MARK ZUCKERBERG: And I think
that’s what we’re saying. I mean, I think
in general, we’re talking about the large
scale of data being assembled into meaning something different
from what the individual data points mean. JONATHAN ZITTRAIN: Yes. MARK ZUCKERBERG: And
I think that that’s the whole challenge here. But I philosophically
agree with you that– I want to think about– I do think about
the work that we’re doing as a decentralizing
force in the world. And a lot of the reason why I
think people of my generation got into technology
is because we believe that technology gives
individuals power and isn’t massively centralizing. Now, you’ve built a bunch of
big companies in the process. But I think what
has largely happened is that individuals today
have more voice, more ability to affiliate with
who they want and stay connected with people, ability
to form communities in ways that they couldn’t before. I think that that’s massively
empowering to individuals. And that’s philosophically
kind of the side that I tend to be on. So that’s why I’m
thinking about going back to decentralized or
blockchain authentication. That’s why I’m kind of
bouncing around how could you potentially make this work,
because my orientation is to try to go in that direction. JONATHAN ZITTRAIN: Yes. MARK ZUCKERBERG:
An example where I think we’re generally
a lot closer to going in that direction is encryption. I mean, this is, I think,
one of the really big debates today, is basically
where the boundary is on where you would want
a messaging service to be encrypted. And there are all these benefits
from a privacy and security perspective. But on the other hand of
what we’re trying to do, one of the big issues
that we’re grappling with is content governance
and where’s the line between free
expression and, I suppose, privacy on one side,
but safety on the other. People do really bad things,
right, some of the time. And I think people rightfully
have an expectation of us that we’re going
to do everything we can to stop terrorists
from recruiting people or people from
exploiting children or doing different things. And moving in the direction
of making these systems more encrypted certainly
reduces some of the signals that we would have access
to be able to do some of that really important work. But here we are. We’re sitting in
this position where we’re running WhatsApp,
which is the largest end-to-end encrypted
service in the world. We’re running messenger, which
is another one of the largest messaging systems in the world,
where encryption is an option, but it isn’t the default. I don’t think
long-term, it really makes sense to be running
different systems with very different policies on this. I think this is sort of a
philosophical question where you want to figure out
where you want to be on it. So here’s– so my
question for you, and then I’ll talk about how
I’m thinking about this, is, all right. If you were in my
position and you got to– flip a switch is
probably way too glib because there’s a lot of
work that goes into this– and go in one direction
for both of those services, how would you think about that? JONATHAN ZITTRAIN:
Well, the question you’re putting on the
table, which is a hard one, is is it OK– and let’s just take the
simple case– for two people to communicate with
each other in a way that makes it difficult for any third
party to casually listen in? Is that OK? And I think the way we
normally answer that question is kind of a form of what you
might call status quoism, which is not satisfying. It’s whatever has been the case
is what should stay the case. And so, for WhatsApp, it’s like
right now, as I understand it– you could correct
me if I’m wrong– is pretty hard to get into. MARK ZUCKERBERG: It’s
fully end-to-end encrypted. JONATHAN ZITTRAIN: Right. So Facebook gets
handed a subpoena or a warrant or something from
name your favorite country. And you’re just like,
thank you for playing. We have nothing for you. MARK ZUCKERBERG: Oh, yeah. Yeah. We’ve had employees
thrown in jail because we have gotten
court orders that we have to turn over data that we
wouldn’t probably, anyway, but we can’t because
it’s encrypted.
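As a compact illustration of why those court orders come back empty under end-to-end encryption, here is a sketch using the real PyNaCl library; the three-party framing is illustrative. Only the endpoints hold usable keys, so the relay has nothing meaningful to turn over.

```python
# Why a subpoena to the relay yields nothing (real PyNaCl library; the
# three-party framing is illustrative): only the endpoints hold usable keys.
from nacl.public import PrivateKey, Box

alice, bob = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts to Bob's public key; the provider relays only ciphertext.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

# Bob, holding his private key, can read it.
assert Box(bob, alice.public_key).decrypt(ciphertext) == b"meet at noon"

# The provider holds neither private key, so a court order for plaintext
# has nothing meaningful to reach.
```

JONATHAN ZITTRAIN: Yes. And then, on the other hand– and this is not as clean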
as it could be in theory– Messenger is sometimes
encrypted, sometimes not. If it doesn’t happen to have
been encrypted by the users, then that subpoena could work. And, more than that, there
could start to be some automated systems, either on
Facebook’s own initiative or under pressure
from governments in the general case,
not a specific warrant, to say, hey, if the
following phrases appear, if there’s some
telltale that says, this is somebody going after
a kid for exploitation, it should be forwarded up. If that’s already
happening and we can produce X number of people
who have been identified and a number of crimes
averted that way, who wants to be the person
to be like, lock it down? We don’t want any more of that. But I guess to put myself now to
your question, when I look out over years rather than
just weeks or months, the ability to casually peek
at any conversation going on between two people or
among a small group of people, or even to have a machine do
it for you so you can just set your alert list, crudely
speaking, and get stuff back– it’s always trite to
call something Orwellian, but it makes Orwell
look like a piker. I mean, it seems
like a classic case where the next
sentence would be, what could possibly go wrong? And we can fill that in. And it does mean,
though, I think, that we have to
confront the fact that, if we choose to allow that
kind of communication, then there’s going
to be crimes unsolved that could have been solved. There’s going to be
crimes not prevented that could have been prevented. And the only thing that kind
of blunts it a little is it is not really all or nothing. The modern surveillance
states of note in the world have a lot of arrows
in their quivers. And just being able to
darken your door and demand surveillance of a
certain kind, that might be a first thing
they would go to, but they’ve got a plan B,
a plan C, and a plan D. And I guess it really gets
to what’s your threat model. If you think everybody
is kind of a threat– think about the battles
of copyright 15 years ago. Everybody is a
potential infringer. All they have to do
is fire up Napster. Then, you’re wanting
some massive technical infrastructure to
prevent the bad thing. If what you’re
thinking is instead there are a few
really bad apples, and they tend to, when they
congregate online or otherwise with one another– tend to identify themselves. And then, we might
have to send somebody near their house to listen
with a cup at the window, metaphorically speaking. That’s a different threat
model and might not need it. Is that getting to an
answer to your question? MARK ZUCKERBERG: Yeah. And I think I generally agree. I mean, I’ve already
said publicly that my inclination is
to move these services in the direction of
being all encrypted, at least the private
communication version. I basically think, if you want
to kind of talk in metaphors, messaging is like people’s
living room, right? And I think we
definitely don’t, I think, want a
society where there’s a camera on
everyone’s living room watching the content
of those conversations. JONATHAN ZITTRAIN:
Even as we’re now– I mean, it is 2019. People are happily putting
cameras in their living rooms. MARK ZUCKERBERG: But
that’s their choice. But I guess they’re putting
cameras in their living rooms, well, for a number of reasons. JONATHAN ZITTRAIN: And
Facebook has a camera that can go in your living room. [LAUGHTER] I just want to be clear. [LAUGHTER] MARK ZUCKERBERG: Yeah. Although, that would
be encrypted in a– [LAUGHTER] JONATHAN ZITTRAIN: Encrypted
between you and Facebook. MARK ZUCKERBERG: No, no. I think– JONATHAN ZITTRAIN: But it also–
doesn’t it have a little Alexa functionality, too? MARK ZUCKERBERG: Well,
Portal works over Messenger. So if we go towards
encryption on Messenger, then that’ll be fully
encrypted, which I think, frankly, is probably
what people want. The other model
besides the living room is the town square, right? And that, I think, just
has different social norms and different policies
and norms that should be at play around that. But I do think that these things
are very different, right? You’re not going to– you may end up in a
world where the town square is a fully decentralized
or fully encrypted thing. But it’s not clear
what value there is in encrypting something
that’s public content, anyway, or very, very broad. JONATHAN ZITTRAIN: But
now, you were put to it pretty hard in that,
as I understand it, there’s now a change
to how WhatsApp works that there’s only
five forwards permitted on something. MARK ZUCKERBERG: So this is
a really interesting point, right? So when people talk about how
encryption will darken some of the signals that
we’ll be able to use, both for potentially
providing better services and for preventing
harm, one of the, I guess somewhat
surprising to me, findings of the
last couple of years of working on content
governance and enforcement is that it often is much
more effective to identify fake accounts and
bad actors upstream of them doing something
bad by patterns of activity rather than looking
at the content. JONATHAN ZITTRAIN:
So-called metadata. MARK ZUCKERBERG: Sure. JONATHAN ZITTRAIN: I don’t
know what they’re saying, but here’s who they’re
calling kind of thing. MARK ZUCKERBERG: Yeah. Or just like this
account doesn’t seem to really act like a person. And I guess as AI
gets more advanced and you build these
adversarial networks or generative adversarial
networks, you’ll get to a place where you have AI
that can probably more effectively mimic– JONATHAN ZITTRAIN:
Go undercover. Act like a person for a while. MARK ZUCKERBERG: But,
at the same time, you’ll be building
up contrary AI on the other side that
is better at identifying AIs that are doing that. But this has certainly been
the most effective tactic across a lot of the
areas where we’ve needed to focus on preventing harm. The ability to
identify fake accounts, which a huge amount of the– under any category of issue
that you’re talking about, a lot of the issues
downstream come from fake accounts or
people who are clearly acting in some malicious
or not normal way. You can identify a lot of
that without necessarily even looking at the content itself.
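A toy sketch of that upstream, metadata-only approach: the behavioral features and thresholds below are invented stand-ins for a learned model, but the point is that nothing here ever reads message content.

```python
# Toy stand-in for a model over behavioral metadata: every feature below is
# invented for illustration, and no message content is ever inspected.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    account_age_days: int
    messages_per_day: float
    distinct_recipients_per_day: float
    fraction_forwarded: float

def looks_automated(a: AccountActivity) -> bool:
    if a.account_age_days < 2 and a.messages_per_day > 500:
        return True   # brand-new account blasting messages
    if a.distinct_recipients_per_day > 200:
        return True   # fan-out far beyond normal personal use
    if a.fraction_forwarded > 0.9:
        return True   # almost nothing original: pure relay behavior
    return False
```

And if you have to look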
at a piece of content, then, in some cases,
you’re already late because the content
exists, and the activity has already happened. So that’s one of the
things that makes me feel like encryption for
these messaging services is really the right
direction to go. Because it’s a very
pro-privacy and pro-security move to give people that
control and assurance. And I’m relatively confident
that, even though you are losing some tools on
the finding harmful content side of the ledger, I don’t
think at the end of the day that those are going
to end up being the most important tools
for finding most of the harmful content. JONATHAN ZITTRAIN: But
now, connect it up quickly to the five forwards thing. MARK ZUCKERBERG: Oh, yeah. Sure. So that gets down
to if you’re not operating on a piece
of content directly, you need to operate on patterns
of behavior in the network. And what we basically
found was there weren’t that many good uses
for people forwarding things more than five times, except to
basically spam or blast stuff out. It was being
disproportionately abused.
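One way to picture the mechanism behind that change: the relay inspects only a forward counter carried as metadata, never the encrypted payload itself. A minimal sketch, with the enforcement details invented for illustration (the real WhatsApp implementation differs):

```python
# Minimal sketch of a metadata-only forward limit (details invented; the
# real WhatsApp mechanism differs): the relay reads a counter, never the
# encrypted payload.
FORWARD_LIMIT = 5

class ForwardLimitExceeded(Exception):
    pass

def forward(payload: bytes, times_forwarded: int) -> tuple[bytes, int]:
    if times_forwarded >= FORWARD_LIMIT:
        raise ForwardLimitExceeded("message has reached the forward limit")
    return payload, times_forwarded + 1  # content stays opaque throughout
```

So you end up thinking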
about different tactics when you’re not operating
on content specifically. You end up thinking about
patterns of usage more. JONATHAN ZITTRAIN:
Well, spam, I get. And I’m always in favor of
things that reduce spam. However, you could also say
the second category was just to spread content. You could have the classic– I don’t know– Les Mis or Paul Revere’s
ride or Arab Spring, and, in the romanticized
vision of it, gosh, this is a way for
people to pass the message along, person to person,
that you can’t stop the signal, to use a Joss Whedon reference. You really want to
get the word out. This would obviously
stop that, too. MARK ZUCKERBERG: Yeah. And then, I think
the question is you’re just weighing whether you
want this private communication tool where the vast majority
of the use and the reason why it was designed was the
vast majority is one-on-one. There’s a large amount of groups
that people communicate in, too. But it’s a pretty small edge
case of people operating this where you have a lot
of different groups, and you’re trying to
organize something and almost hack public content type or
public sharing type utility into an encrypted space. And, again, there, I think
you start getting into is this the living room or
is this the town square? And when people
start trying to use tools that are
designed for one thing to get around what I think the
social norms are for the town square, that’s when
I think you probably start to have some issues. This is not– we’re not done
addressing these issues. There’s a lot more to
think through on this. But that’s the general
shape of the problem that at least I perceive from
the work that we’re doing. JONATHAN ZITTRAIN: Well,
without any particular segue, let’s talk about fake news. So insert your
favorite segue here. There is some choice, or at
least some decision that gets made, to figure out what’s
going to be next in my news feed when I scroll up a little more. And, in the last
conversation bit, we were talking
about how much we’re looking at content versus
tell tales and metadata, things that surround
the content. For knowing about what that
next thing in the news feed should be, is it a valid,
desirable, material consideration, you think, for a
platform like Facebook to say, is the thing we are
about to present true, whatever true means? MARK ZUCKERBERG: Well, yes. Because, again, getting
at trying to serve people, people tell us that they don’t
want fake content, right? I mean, I don’t know anyone
who wants fake content. I think the whole issue is,
again, who gets to decide? So broadly speaking, I don’t
know any individual who would sit there and
say, yes, please show me things that you know
are false and that are fake. People want good quality
content and information. That said, I don’t really
think that people want us to be deciding what is true for them. And people disagree
on what is true. And the truth is, there
are different levels when someone is telling a story. Maybe the meta arc is talking
about something that is true, but the facts that
were used in it are wrong in some nuanced
way, but it speaks to some deeper experience. Well, is that true or not? And do people want
that disqualified from being shown to them? I think different
people are going to come to different places on this. So I’ve been very
sensitive on this. We really want to
make sure that we’re showing people high quality
content and information. We know that people don’t
want false information. So we’re building
quite advanced systems to be able to make
sure that we’re emphasizing and
showing stuff that is going to be high quality. But the big question is,
where do you get the signal on what the quality is? So the kind of
initial V1 of this was working with third
party fact checkers. I believe very strongly that
people do not want Facebook and that we should
not be the arbiters of truth in deciding what
is correct for everyone in society. I think people already
generally think that we have too much power in
deciding what content is good. I tend to also be
concerned about that. And we should talk about
some of the governance stuff that we’re working
on separately to try to make it so that we can bring
more independent oversight into that. But let’s put that
in a box for now and just say that with
those concerns in mind, I’m definitely
not looking to try to take on a lot more in terms
of also deciding, in addition to enforcing all the
content policies– also deciding what is true
for everyone in the world. OK. So the V1 of that is
we’re going to work with– JONATHAN ZITTRAIN:
Truth experts. MARK ZUCKERBERG: We’re
working with fact checkers. And they’re experts. And basically, there’s a whole
field of how you go and assess certain content. They’re accredited. People can disagree with the
leaning of some of these– JONATHAN ZITTRAIN:
Who does accreditation for fact checkers? MARK ZUCKERBERG: The Poynter
Institute for Journalism. JONATHAN ZITTRAIN: I’ll
apply for my certification. MARK ZUCKERBERG: You may. You’d probably get it. But you’d have to go
through the process. The issue there is there
aren’t enough of them. So there is a large content– obviously, a lot of information
is shared every day. And there just aren’t
a lot of fact checkers. So then, the question is,
OK, that is probably– JONATHAN ZITTRAIN:
But the portion– you’re saying the food is good. It’s just the
portions are small. But the food is good. MARK ZUCKERBERG: I
think in general. So you build systems,
which is what we’ve done, especially leading up to
elections, which I think are some of the most fraught
times around this, where people really are aggressively trying
to spread misinformation– you build systems that
prioritize content that seems like it’s going
viral because you want to reduce the prevalence of
how widespread this stuff gets. So, that way, the
fact checkers have tools to be able to prioritize
what they need to go look at.
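The triage he describes, too much content and too few fact checkers, is essentially a priority queue keyed on a virality estimate. A rough sketch, with the virality proxy invented for illustration:

```python
# Rough sketch of fact-checker triage: with limited reviewers, queue stories
# by an invented virality proxy so attention goes to what spreads fastest.
import heapq

def build_review_queue(stories: list[dict]) -> list[tuple[float, str]]:
    heap: list[tuple[float, str]] = []
    for s in stories:
        virality = s["shares"] / max(s["hours_since_post"], 1.0)
        heapq.heappush(heap, (-virality, s["id"]))  # negate for a max-queue
    return heap

def next_for_review(heap: list[tuple[float, str]]) -> str:
    neg_virality, story_id = heapq.heappop(heap)
    return story_id
```

But it’s still getting to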
a relatively small percent of the content. So I think the
real thing that we want to try to get to over
time is more of a crowd sourced model, where people– it’s not that
people are trusting some basic set of experts
who are accredited, but are in some kind of lofty
institution somewhere else. It’s like, do you trust– if you get enough data points
from within the community of people reasonably
looking at something and assessing it over
time, then the question is, can you compound that
together into something that is a strong enough signal
that we can then use?
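A toy sketch of what "compounding" community assessments into one signal could look like, assuming hypothetical per-rater trust weights; a production system would weight raters far more carefully.

```python
# Toy aggregation of community assessments into one signal. Per-rater trust
# weights are hypothetical; real systems weight raters far more carefully.
def credibility_signal(ratings: list[tuple[float, float]]) -> float:
    """ratings: (rater_trust, rating) pairs, rating 0.0 (false) to 1.0 (accurate)."""
    total = sum(trust for trust, _ in ratings)
    if total == 0:
        return 0.5  # no usable signal: stay neutral
    return sum(trust * rating for trust, rating in ratings) / total

def strong_enough(ratings: list[tuple[float, float]], min_raters: int = 20) -> bool:
    return len(ratings) >= min_raters  # only act on enough independent raters
```

JONATHAN ZITTRAIN: Kind of in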
the old school like a Slashdot moderating system
with only the worry that if the stakes
get high enough, somebody wants to
AstroTurf that. MARK ZUCKERBERG: Yes. JONATHAN ZITTRAIN: I’d be– MARK ZUCKERBERG: There
are a lot of questions here, which is why I’m not
sitting here and announcing a new program. [LAUGHTER] But what I’m saying
is this is like– this is the general
direction that I think we should be thinking about. And I think that there
are a lot of questions. And we’d like to run
some tests in this area to see whether
this can help out, which would be upholding
the principles, which are that we want to stop
the spread of misinformation knowing that no one
wants misinformation, and the other principle, which
is that we do not want to be– JONATHAN ZITTRAIN:
Want to be the decider. Yes. MARK ZUCKERBERG: And I think
that that’s the basic– those are the basic contours,
I think, of that problem. JONATHAN ZITTRAIN: So
let me run an idea by you that you can
process in real time and tell me the eight
reasons I have not thought of why this is a terrible idea. And that would be
people see something in their Facebook feed. They’re about to share
it out because it’s got a kind of outrage factor to it. I think of the classic
story from two years ago in the Denver Guardian
about an FBI agent suspected in Hillary Clinton
email leak implicated in murder-suicide. I have just uttered fake news. None of that was true. If you clicked through
the Denver Guardian, there was just that article. There is no Denver Guardian. If you live in Denver,
you cannot subscribe. It is unambiguously fake. And it was shared more
times than the most shared story during the election
season of The Boston Globe. And so– MARK ZUCKERBERG: And this
is actually an example, by the way, of where
trying to figure out fake accounts is a much simpler
solution than trying to down– so– JONATHAN ZITTRAIN: If
newspaper has one article, wait for 10 more before you
decide they’re a newspaper. MARK ZUCKERBERG: Yeah. Or, I mean, there are
any number of systems that you could build
to basically detect, hey, this is a– JONATHAN ZITTRAIN: A Potemkin. MARK ZUCKERBERG: This
is a fraudulent thing. And then, you can
take that down. And that ends up being a much
less controversial decision because you’re doing
it upstream on the basis of inauthenticity
in a system where people are supposed to be real and
represent that they’re their real selves than
downstream trying to say, hey, is this true or false? JONATHAN ZITTRAIN:
I made a mistake in giving you the easy
case, so I should not have used that example. You’re right, and you knocked
that one out of the park. And Denver Guardian, come up
with more articles and be real. And then, come back
and talk to us. So here’s the harder
case, which is something that might be in an outlet
that is viewed as legitimate, has a number of
users, et cetera, so you can’t use the
metadata as easily. Imagine if somebody,
as they shared it out, could say, by the way,
I want to follow this. I’m going to learn a
little bit more about this. They click a button
that says that. And I also realized when I
talked earlier with somebody at Facebook in this that adding
a new button to the home page is like everybody’s
first idea for something. MARK ZUCKERBERG: But it’s a
reasonable thought experiment, even though it– JONATHAN ZITTRAIN: Fair enough. MARK ZUCKERBERG: –would
lead to a very bad UI. JONATHAN ZITTRAIN: I
understand this is already in the land of fantasy. So you add the button. They say, I want to
follow up on this. If enough people are
clicking comparatively on the same thing to say, I
want to learn more about this. If anything else develops,
let me know, Facebook. That then– I have
my pneumatic tube. It then goes to a
virtually convened panel of three librarians. We go to the librarians
of the nation and the world at public
and private libraries across the land, who agree to
participate in this program. Maybe we set up a
little foundation for it that’s endowed permanently
and no longer connected to whoever endowed it. And those librarians
together discuss the piece. And they come back
with what they would tell a patron if somebody
came up to them and said, I’m about to cite this in
my social studies paper. What do you think? And librarians live for
questions like that. They’re like, well,
let us tell you. And they have a huge fiduciary
notion of patron duty that says, I may
disapprove of you even studying this or whatever,
but I’m here to serve you, the user. And I just think you should
know this is why maybe it’s not such a good source. And when they come up with
that, they can send it back. And it gets pushed
out to everybody who asked for follow up. And they could do
with it as they will. And last piece of the puzzle– we have high school students who
apprentice as librarian number three for credit. And then, they can get graded
on how well they participate in this exercise,
which helps generate a new generation of
librarian themed people who are better off
at reading things. MARK ZUCKERBERG: All right. Well, I think you
have a side goal here, which I haven’t been thinking
about, on the librarian thing– JONATHAN ZITTRAIN:
Which is the evil goal of promoting libraries. MARK ZUCKERBERG: Well, it’s– no. But, I mean– I think solving
preventing misinformation or spreading misinformation
is hard enough without also trying
to develop high school students in a direction. JONATHAN ZITTRAIN: My
colleague Charlie Nesson calls this solving a
problem with a problem. MARK ZUCKERBERG: All right. Well– JONATHAN ZITTRAIN:
But anyway, yes. MARK ZUCKERBERG:
So I actually think I agree with most of
what you have in there. It doesn’t need to be a
button on the home page. It can be– it turns out that
there’s so many people using these services that
even if you get– even if you put something
that looks like it’s not super prominent, like behind
the three dots on any given news feed story, you
have the option, yeah, not everyone is going to– JONATHAN ZITTRAIN: If
1 out of 1,000 do it, you still get 10,000
or 100,000 people. MARK ZUCKERBERG:
But I actually think you can do even better, which
is it’s not even clear that you need that signal. I think that that’s
super helpful. I think really what matters
is looking at stuff that’s getting a lot of distribution. So I think that there’s
kind of this notion, I mean, going back to the encryption
conversation, which is if I say something
that’s wrong to you in a one-on-one
conversation, does that need to be fact checked? I mean, yeah, it would be good
if you got good– if you got the most accurate information. JONATHAN ZITTRAIN: I do have a
personal librarian to accompany me for most conversations, yes. MARK ZUCKERBERG: Well, you are– JONATHAN ZITTRAIN: Unusual. MARK ZUCKERBERG: Yes. [LAUGHTER] That’s the word I
was looking for. [LAUGHTER] JONATHAN ZITTRAIN: I’m
not sure I believe you. MARK ZUCKERBERG: But I
think that there’s limited– I don’t think anyone would
say that every message that goes back and forth, especially
in an encrypted messaging service, should be– JONATHAN ZITTRAIN: Fact checked. MARK ZUCKERBERG:
Should be fact checked. JONATHAN ZITTRAIN: Correct. MARK ZUCKERBERG: So I think the
real question is, all right. When something starts
going viral or getting a lot of distribution, that’s
when it becomes most socially important for it to have some level
of validation, or at least that we know that the
community in general thinks that this is
a reasonable thing. So it’s actually– well,
it’s helpful to have this signal of whether people
are flagging this as something that we should look at. I actually think
increasingly, you want to be designing
systems that just prevent alarming or sensational
content from going viral in the first place and making
sure that the stuff that is getting wide
distribution is doing so because it’s high quality on
whatever front you care about. JONATHAN ZITTRAIN:
And that quality is still generally from Poynter
or some external party that– MARK ZUCKERBERG: Well,
quality has many dimensions. But certainly, accuracy
is one dimension of it. I mean, you pointed out, I
think in one of your questions, is this piece of content
prone to incite outrage? If you don’t mind,
I’ll get to your panel of three things in a second. But as a slight
detour on this, one of the findings that has
been quite interesting is there’s this question about
whether social media in general basically makes it so that
sensationalist content gets the most distribution. So, all right, we’re going to have rules
about what content is allowed. And what we’ve found is that,
generally, within whatever rules you set up, as
content approaches the line of what is allowed, it
often gets more distribution. So you’ll have
some rule on what– take a completely
different example– on nudity policies,
where it’s like, OK, you have to define what
is unacceptable nudity in some way. As you get as close to that as
possible, it’s like, all right. This is maybe a
photo of someone– JONATHAN ZITTRAIN: The
skin to share ratio goes up until it gets banned,
at which point, it goes to zero. MARK ZUCKERBERG: Yes. OK. So that is a bad
property of a system that I think you want
to generally address. Or you don’t want– you don’t want to design
a community or systems for helping to build
a community where things that get as close
to the line as what is bad get the most distribution. JONATHAN ZITTRAIN: So long
as we have the premise, which, in many cases, is
true, but I could probably try to think of some
where it wouldn’t be true, that as you near the
line, you are getting worse. MARK ZUCKERBERG:
That’s a good point. That’s a good point. JONATHAN ZITTRAIN: It might
be humor that’s really edgy– MARK ZUCKERBERG: That’s true. JONATHAN ZITTRAIN: –and that
conveys a message that would be impossible to convey without
the edginess while not still– MARK ZUCKERBERG: That’s true. JONATHAN ZITTRAIN: Yeah. MARK ZUCKERBERG:
But then, you get the question of what’s the
cost benefit of allowing that? And obviously, where
you can accurately separate what’s good and bad. Like in the case
of misinformation, I’m not sure you can
do it fully accurately, but you can try to build
systems that approximate that. There’s certainly
the issue, which is that there is
misinformation which leads to massive public harm, right? So if it’s misinformation
that is also spreading hate and leading to
genocide or public attacks, it’s like, OK, we’re
not going to allow that. That’s coming down. But then, generally, if you
say something that’s wrong, we’re not going to
try to block that. We’re just going to try
to not show it to people widely because people don’t
want content that is wrong. So then, the question
is, as something is approaching the line,
how do you assess that? This is a general theme in a
lot of the content governance and enforcement work
that we’re doing, which is there’s one piece of
this, which is just making sure that we can as
effectively as possible enforce the policies that exist. And there’s a whole
other stream of work, which I call borderline
content, which is basically this issue of, as content
approaches the line of being against the policies,
how do you make sure that that isn’t the
content that is somehow getting the most distribution?
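A minimal sketch of such a borderline-content penalty, assuming a hypothetical classifier score for how close a post sits to the policy line: the curve's shape is invented, but it inverts the engagement pattern described above by cutting distribution as content nears the line.

```python
# Invented penalty curve for borderline content: as a classifier's predicted
# probability of violating policy nears 1.0, distribution falls toward zero,
# inverting the engagement curve described above.
def demotion_multiplier(p_violation: float, start: float = 0.6) -> float:
    """Multiplier in [0, 1] applied to a story's distribution score."""
    if p_violation <= start:
        return 1.0
    return max(0.0, 1.0 - (p_violation - start) / (1.0 - start))

final_score = 0.87 * demotion_multiplier(0.75)  # a near-the-line story loses reach
```

And a lot of the things that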
we’ve done in the last year were focused on that problem. And it really improves the
quality of the service, and people appreciate it. JONATHAN ZITTRAIN:
So this idea would be stuff that you’re
kind of letting down easy without banning. And letting down
easy as it’s going to somehow have a coefficient
of friction for sharing that goes up. It’s going to be harder
for it to go viral. MARK ZUCKERBERG: Yeah. That’s fascinating because
it’s just against– you can take almost any
category of policy that we have. So I used nudity a second ago. Gore and violent imagery. Hate speech. Any of these things. I mean, there’s–
like hate speech, there’s content that you would
just say is mean or toxic but that did not violate– but you would not want to
have a society that banned being able to say that thing. But you don’t necessarily want
that to be the content that is getting the most distribution. JONATHAN ZITTRAIN: So here’s a
classic transparency question around exactly that
system you described. And when you described this– I think you did a post
around this a few months ago. This was fascinating. You had graphs in
the post depicting this, which was great. How would you feel about sharing
back to the person who posted or possibly to
everybody who encounters it its coefficient of friction? Would that freak people out? Would it be like, all right? And, in fact, they
would then probably start conforming their
posts for better or worse to try to maximize
the shareability. But that that rating is already
somewhere in there by design. Would it be OK to surface it? MARK ZUCKERBERG:
So, as a principle, I think that that would be good. But I don’t– the way that the
systems are designed isn’t that you get a score of how
inflammatory or sensationalist a piece of content is. The way that it
basically works is you can build classifiers
that identify specific types of things, right? So we’re going down the
list of, all right, there’s 20 categories of harmful content
that you’re trying to identify, everything from terrorist
propaganda on the one hand to self-harm issues
to hate speech and election interference. And basically, each
of these things, while it uses a lot of the same
underlying machine learning infrastructure, you’re doing
specific work for each of them. So if you go back to the
example on nudity for a second, you’re not necessarily
scoring everything on a scale of not at all nude to nude. You’re basically enforcing
specific policies. So you’re saying, OK. JONATHAN ZITTRAIN: So
by machine learning, it would just be, give me an
estimate of the odds that, if a human who was employed
to enforce policy looked at it, they would find it violates the policy.
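That structure, separate classifiers per policy area sharing common infrastructure, might look like the following sketch; the policy list and the stand-in model are illustrative.

```python
# Sketch of separate per-policy classifiers on shared infrastructure; the
# policy list and the stand-in model are illustrative.
import math

POLICY_AREAS = [
    "terrorist_propaganda", "self_harm", "hate_speech",
    "election_interference", "nudity",
]

def classify(area: str, embedding: list[float]) -> float:
    # Stand-in for a per-policy model head: a dummy score squashed to (0, 1)
    # purely so the sketch runs.
    s = sum(embedding) / (len(embedding) or 1)
    return 1.0 / (1.0 + math.exp(-s))

def review_probabilities(embedding: list[float]) -> dict[str, float]:
    """One probability per policy area, not a single 'badness' scale."""
    return {area: classify(area, embedding) for area in POLICY_AREAS}
```

MARK ZUCKERBERG: And you have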
a sense of, OK, this is– so what are the things that are
adjacent to the policy, right? So you might say, OK. Well, if the person
is completely naked, that is something that
you can definitely build a classifier to
be able to identify with relatively high accuracy. But even if they’re
not, then the question is you kind of need to be
able to qualitatively describe what are the things that
are adjacent to that. So maybe the person is
wearing a bathing suit and is in a sexually
suggestive position, right? It’s not like any
piece of content you’re going to score from
not at all nude to nude. But you kind of have the
cases for what you think are adjacent to the issues. And, again, you ground this. And qualitatively,
people might click on it. They might engage with it. But, at the end, they don’t
necessarily feel good about it. And you want to get at, when
you’re designing these systems, not just what people
do, but also to make sure you
factor in whether this is the content that
Constitutional law, there’s a formal
kind of definition that’s emerged for
the word prurient, if something appeals to
the prurient interest, as part of a definition of
obscenity, the famous Miller test, which is not a
beer oriented test. And part of a prurient interest
is basically it excites me, and yet, it completely
disgusts me. And it sounds like
you’re actually converging to the Supreme
Court’s vision of prurience with this. MARK ZUCKERBERG: Maybe. JONATHAN ZITTRAIN: And it
might be– don’t worry. I’m not trying to
nail you down on that. [LAUGHTER] But it’s very interesting
that machine learning, which you invoked, is both
really good, I gather, at something like this. It’s the kind of
thing that’s like, just have some people tell
me with their expertise, does this come near to
violating the policy or not? And I’ll just, through
a spidey sense, start
whether it would, rather than being
able to throw out exactly what the factors are. I know the person’s
fully clothed, but it’s still is going
to invoke that quality. So all of the benefits
of machine learning and all of, of
course, the drawbacks where it classifies something. And somebody is
like, wait a minute. That was me doing a parody
of blah, blah, blah. That all comes to the fore. MARK ZUCKERBERG: Yeah. And, I mean, when
you ask people what they want to see, in addition
to looking at what they actually engage with, you do get a
completely different sense of what people value,
and you can build systems that approximate that. But going back to your
question, I think, rather than giving people
a score of the friction, I think you can
probably give people feedback of, hey,
this might make people uncomfortable in this way– in this specific way. JONATHAN ZITTRAIN: It might
affect how much it gets– how much it gets shared. MARK ZUCKERBERG: And this
gets down to a different– there’s a different
AI ethics question, which I think is
really important here, which is designing AI systems
to be understandable by people. And, to some degree,
you don’t just want it to spit out a
score of how offensive or where it scores
on any given policy. You want it to be able to
map to specific things that might be problematic. And that’s the way
that we’re trying to design the systems overall. JONATHAN ZITTRAIN: Yes. Now, we have something parked
in the box we should take out, which is the external
review stuff. But before we do, one other
just transparency thing maybe to broach that basically
just occurred to me. I imagine it might
be possible to issue me a score of how much I’ve
earned for Facebook this year. It could simply say,
this is how much we collected on the basis
of you in particular being exposed to an ad. And then, sometimes,
people, I guess, might compete to
get the numbers up. But I’m just curious,
would that be a figure? I’d kind of be curious to know,
in part because it might even lay the groundwork of
being like, look, Mark, I’ll double it. You can have double the money. And then, don’t show me any ads. Can we get a car off
of that lot today? [LAUGHTER] MARK ZUCKERBERG: OK. Well, there’s a lot in there. JONATHAN ZITTRAIN: It
was a quick question. MARK ZUCKERBERG: So there’s a
question in what you’re saying, which is so we build
an ad supported system. Should we have an option for
people to pay to not see ads, I think is kind of
what you’re saying. I mean, just as the basic primer
from first principles on this, we’re building a service. We want to give
everyone a voice. We want everyone to
be able to connect with who they care about. If you’re trying to build
a service for everyone– JONATHAN ZITTRAIN:
It’s got to be free. MARK ZUCKERBERG: You want it to
be as affordable as possible. JONATHAN ZITTRAIN: That’s
just going to be the argument. Yes. MARK ZUCKERBERG: So this is
kind of a tried and true thing. There are a lot of
companies over time that have been ad supported. In general, what
we find is that, if people are going to see ads,
they want them to be relevant. They don’t want them to be junk. So then, within
that, you give people control over how their data
is used to show them ads. But the vast majority
of people say, show me the most
relevant ads that you can because I get that
I have to see ads. This is a free service. So now, the question is– all right. So there’s a whole set
of questions around that that we can get into, but– JONATHAN ZITTRAIN: For
which we did talk about– we don’t have to reopen
it– the personalization exploitation or even just
philosophical question. Right now, Uber or Lyft
are not funded that way. We could apply this ad
model to Uber or Lyft. Free rides. Totally free. It’s just every fifth ride
takes you to Wendy’s and idles outside the
drive-through window. Totally up to you
what you want to do, but you’re going to
sit here for a while. And then, you go on your way. I don’t know how we– status quoism would
probably say people would have a problem
with that, but it would give people rides that
otherwise wouldn’t get rides. MARK ZUCKERBERG: I have
not thought about that case in their business. JONATHAN ZITTRAIN:
Well, that’s my patent, dammit, so don’t you steal it. MARK ZUCKERBERG:
Certainly, some services, I think, lend themselves
better towards being ad supported than others. JONATHAN ZITTRAIN: OK. MARK ZUCKERBERG: OK. And I think generally,
information-based ones tend to– JONATHAN ZITTRAIN: Then,
my false imprisonment hypo. OK. Fair enough. MARK ZUCKERBERG: That seems– there might be
more issues there. But OK. But go to the
subscription thing. JONATHAN ZITTRAIN: Yes. MARK ZUCKERBERG: When people
have questions about the ad model on Facebook, I don’t
think the questions are just about the ad model. I think they’re
about both seeing ads and data use around ads. And the thing that I think– so when I think about
this, I don’t just think you want to let
people pay to not see ads because I actually think
then, the question is– the questions are
around ads and data use. And I don’t think
people are going to be that psyched about
not seeing ads, but then, not having different controls
over how their data is used. OK, but now, you start getting
into a principle question, which is, are we
going to let people pay to have different controls
on data use than other people? And my answer to that
is a hard no, right? So the prerequisite– JONATHAN ZITTRAIN: What’s
an example of data use that isn’t ad based, just so we
know what we’re talking about? MARK ZUCKERBERG:
That isn’t ad based. What do you mean? JONATHAN ZITTRAIN:
You were saying, I don’t want to see ads. But you’re saying that’s kind
of just the wax on the car. What’s underneath is
how the data gets used. MARK ZUCKERBERG: Let me keep
going with this explanation. And then I think
this will be clear. So one of the things that
we’ve been working on is this tool that we
call clear history. And the basic idea
is you can kind of analogize it to a
web browser where you can clear your cookies. That’s kind of a normal thing. You know that when you
clear your cookies, you’re going to get logged
out of a bunch of stuff. A bunch of stuff might
get more annoying. JONATHAN ZITTRAIN: Which
is why my guess is– am I right? Probably nobody
clears their cookies. MARK ZUCKERBERG: I don’t know. JONATHAN ZITTRAIN:
They might use incognito mode or something. MARK ZUCKERBERG: I don’t know. How many of you guys clear your
cookies every once in a while? JONATHAN ZITTRAIN: This is not
a representative group, dammit. MARK ZUCKERBERG: OK. [LAUGHTER] Maybe once a year or something,
I’ll clear my cookies. But no– JONATHAN ZITTRAIN:
Happy new year. Clear your cookies. MARK ZUCKERBERG: For
some period of time. JONATHAN ZITTRAIN: OK. Fair enough. MARK ZUCKERBERG: But not
necessarily every day. But it’s important that
people have that tool, even though it might,
in a local sense, make their experience worse. JONATHAN ZITTRAIN: Yes. MARK ZUCKERBERG: OK. So it's that kind of content– what
different services, websites, and apps send Facebook
that we use to help measure the ads and
their effectiveness. So things like if
you’re an app developer and you’re trying to pay for
ads to help grow your app, we want to only charge
you when we actually– when something that we
show leads to an install, not just whether someone
sees the ad or clicks on it.
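(A minimal sketch, with invented event shapes, of what install-based billing implies: an install is billable only when it can be attributed to an earlier ad impression within some window. This is illustrative, not Facebook's actual measurement pipeline.)

    def billable_installs(events, window_hours=24):
        # events: list of {"t": seconds, "user": id, "kind": "impression" | "install"}
        last_impression = {}
        installs = 0
        for e in sorted(events, key=lambda e: e["t"]):
            if e["kind"] == "impression":
                last_impression[e["user"]] = e["t"]
            elif e["kind"] == "install":
                seen = last_impression.get(e["user"])
                if seen is not None and e["t"] - seen <= window_hours * 3600:
                    installs += 1  # charge the advertiser for this install only
        return installs

JONATHAN ZITTRAIN: That requires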
a whole infrastructure to– MARK ZUCKERBERG: Yeah. So you build that out. It helps us show people
more relevant ads. It can help show more
relevant content. Often, a lot of these signals
are super useful, also, on the security side for some
of the other things that we’ve talked about. So that ends up being important. But fundamentally, looking
at the model today, it seems like you should have
something like this ability to clear history. It turns out that it’s a much
more complex technical project. I talked about this at
our developer conference last year, about how
I’d hope that we’d roll it out by the end of 2018. And just the plumbing
goes so deep into all of the different systems that– but we’re still working on it. We’re going to do it. JONATHAN ZITTRAIN: So
clear history basically means I’m as if a noob. I just show– even though
I’ve been using Facebook for a while, it’s as if
it knows nothing about me, and it starts accreting again. And I’m just trying to think,
just as a plain old citizen, how would I make an informed
judgment about how often to do that or when I should do it? MARK ZUCKERBERG: Well, hold on. Let’s go to that in a second. But one thing– just
to connect the dots on the last conversation. JONATHAN ZITTRAIN: Yeah. MARK ZUCKERBERG: Clear history
is a prerequisite, I think, for being able to do
anything like subscriptions. Because partially, what
someone would want to do, if they were going to
really actually pay for a not ad supported version
where their data wasn’t being used in a system like that– you would want to have a
control so that Facebook didn’t have access or
wasn’t using that data or associating it
with your account. And as a principled
matter, we are not going to just offer a control
like that to people who pay. If we’re going to give
controls over data use, we’re going to do that for
everyone in the community. So that’s the first thing
that I think we need to go do. So that’s kind of– this
is sort of how we’re thinking about the projects. And this is a really deep
and big technical project, but we’re committed
to doing it because I think it’s [INAUDIBLE]. JONATHAN ZITTRAIN: And I
guess like an ad blocker, somebody could then
write a little script for your browser that would just
clear your history every time you visit or something. MARK ZUCKERBERG: Oh, yeah. No. But the plan would also
be to offer something that’s an ongoing thing– JONATHAN ZITTRAIN: I see. MARK ZUCKERBERG:
–in your browser. But I think the analogy
here is you kind of have– in your browser, you have the
ability to clear your cookies. And then, in some
other place, you have under your
nuclear settings, don’t ever accept any
cookies in my browser. And it’s like, all right. Your browser is not really
going to work that well. But you can do that if
you want because you should have that control. I think that these are
part and parcel, right? I think a lot of people
might go and clear their history on a periodic
basis because they– or actually, in the
research that we’ve done on this as we’ve
been developing it, the real thing that people
have told us that they want is, similar to cookie
management, not necessarily wiping everything,
because that ends in an inconvenience
of getting logged out of a bunch of things, but
there are just certain services or apps that you don’t want
that data to be connected to your Facebook account. So having the ability
on an ad hoc basis to go through and say,
hey, stop associating this thing is going
to end up being a quite important
thing that I think we want to try to deliver.
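(A purely illustrative Python sketch of the two controls being contrasted: the nuclear wipe of clear history versus disconnecting a single service on an ad hoc basis. The data layout is invented.)

    class OffFacebookActivity:
        def __init__(self):
            self.events_by_source = {}  # e.g. "news_app" -> [event, ...]

        def record(self, source, event):
            self.events_by_source.setdefault(source, []).append(event)

        def clear_history(self):
            # The cookie-style wipe: start over as if a new account.
            self.events_by_source.clear()

        def disconnect(self, source):
            # The ad hoc control: stop associating one service only.
            self.events_by_source.pop(source, None)

So this is partially, as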
we’re getting into this– it’s a more complex thing. But I think it’s very valuable. And I think any conversation
around subscriptions, I think, you would want to start
with giving people these– make sure that everyone
has these kind of controls. We’re kind of in the early
phases of doing that. The philosophical
downstream question of whether you also let
people pay to not have ads– I don’t know. There are a bunch of questions
around whether that’s actually a good thing. But I personally don’t believe
that very many people would like to pay to not have ads. All of the research
that we have– it may still end up being
the right thing to offer that as a choice down the line. But all the data that
I’ve seen suggests that the vast, vast,
vast majority of people want a free service and that
the ads in a lot of places are not even that different
from the organic content in terms of the quality of what
people are being able to see. People like being able
to get information from local businesses and
things like that, too. So there’s a lot of good there. JONATHAN ZITTRAIN: Yeah. 40 years ago, it would have been
the question of ABC versus HBO. And the answer turned
out to be, yes. So you’re right that people
might have different things. There’s a little paradox
lingering in there about it’s something so
important and vital that we wouldn’t want to deprive
anybody of access to it. But therefore, nobody gets
it until we figure out how to remove it for everybody. MARK ZUCKERBERG:
What do you mean? JONATHAN ZITTRAIN:
In other words, if I could buy my way out
of ads and data collection, it wouldn’t be fair
to those who can’t. And therefore, we all subsist
with it until the advances you’re talking about. MARK ZUCKERBERG: Yeah. But I guess what I’m
saying is on the data use, I don’t believe that
that’s something that people should buy. I think the data
principles that we have need to be uniformly
available to everyone. That to me is a really
important principle. Maybe you could
have a conversation about whether you should be
able to pay to not see ads. That doesn’t feel like
a moral question to me. But the question
of whether you can pay to have different
privacy controls feels wrong. So that to me is something
that in any conversation about whether we devolve towards
having a subscription service, I think you have to have
these controls first. And that’s a very deep thing and
a technical problem to go do. But that’s why we’re
working through it. JONATHAN ZITTRAIN: So
long as the privacy controls that we’re
not able to buy our way into aren’t controls
that people ought to have. It’s just the kind of
underlying question of, is the system as it is, the one we
can't opt out of, a fair system? And that's, of course– you have to go into the
details to figure out what you mean by it. But let’s, in the remaining
time we have left– MARK ZUCKERBERG: How
are we doing on time? JONATHAN ZITTRAIN: We’re good. We’re 76 minutes in. MARK ZUCKERBERG: All right. We’re going to get through
maybe half the topics. JONATHAN ZITTRAIN: Yeah. Yeah, yeah. We’re going to bring this
in for a landing soon. What's left on my agenda includes such
things as taking out of the box the independent review stuff.
bit about that. I’d be curious– and this
might be a nice thing, really, as we wrap up, which would
be a sense of any vision you have for what would Facebook
look like in 10 or 15 years. And how different would it
be than the Facebook of 10 years ago is compared to today? So that’s something
I’d want to talk about. Is there anything big on
your list that you want to make sure we talk about? MARK ZUCKERBERG: Those are good. Those are good topics. JONATHAN ZITTRAIN: Fair enough. Sorry. The external review board. MARK ZUCKERBERG: Yeah. So one of the big questions that
I’ve just been thinking about is we make a lot of decisions
around content enforcement and what stays up
and what comes down. And having gone through this
process over the last few years of working on the
systems, one of the themes that I feel really
strongly about is that we shouldn’t be making
so many of these decisions ourselves. Now, one of the ways that I
try to reason about this stuff is take myself out
of the position of being CEO of
the company, almost like a Rawlsian perspective. If I was a different
person, what would I want the CEO of the
company to be able to do? And I would not want so
many decisions about content to be concentrated
with any individual. JONATHAN ZITTRAIN: It is
weird to see big, impactful, to use a terrible
word, decisions about what a huge
swath of humanity does or doesn’t see
inevitably handled as a customer service issue. It does feel like
a mismatch, which is what I hear you saying. MARK ZUCKERBERG:
So I actually think the customer service analogy
is a really interesting one, right? So when you email Amazon
because they don’t– they make a mistake
with your package, that’s customer support, right? I mean, they are trying
to provide a service. And generally, they can invest
more in customer support and make people happier. We’re doing something
completely different. When someone emails us with an
issue or flags some content, they’re basically
complaining about something that someone else in
the community did. So it’s more like a– it’s almost more like a
court system, in that sense. Doing more of that
does not make people happy because in every
one of those transactions, one person ends up the
winner, and one is the loser. Either you said that
the content was fine, in which case, the person
complaining is upset, or you take someone’s
content down, in which case the person is really upset
because you’re now telling them that they don’t have the
ability to express something that they feel is a
valid thing that they should be able to express. So, in some deep sense, while
some amount of what we do is customer support– people get
locked out of their account, et cetera– we now have more
than 30,000 people working on content review
and safety review doing the kind of judgments that– basically, a lot
of the stuff– we have machine learning
systems that flag things that could be problematic,
in addition to people in the community
flagging things, but making these assessments
of whether the stuff is right or not. So one of the questions
that I just think about is like, OK, well, any time you
have people doing this, regardless of how much
training they have, we're going to make mistakes. So you want to start
building in principles around what you would kind
of think of as due process. So we’re building in an ability
to have an appeal, which already is quite good in
that we are able to overturn a bunch of mistakes that
the first line people make in making these assessments. But, at some level,
I think you also want a level of kind of
independent appeal, where if, OK, let’s say– so the appeals go to maybe
a higher level of Facebook employee who’s a
little more trained in the nuances of the policies. But at some point,
I think you also need an appeal to an independent
group, which is like, is this policy fair? Is this piece of
content really getting on the wrong side of the balance
of free expression and safety? And I just don’t think
at the end of the day that that’s something
that you want centralized in a single company. So now, the question is, how
do you design that system? And that’s a real
question, right? So we don’t pretend to
have the answers on this. What we’re basically
working through is we have a draft proposal. We’re working with a lot
of experts around the world to run a few pilots in the
first half of this year that hopefully we can
codify into something that’s a longer term thing. But I just believe
that this is just an incredibly important thing. As a person, and if
I take aside the role that I have as CEO
of the company, I do not want the company
being able to make all of those final decisions
without a check and balance and accountability. So I want to use the position
that I’m in to help build that kind of an institution. JONATHAN ZITTRAIN: Yes. And when we talk
about an appeal, then, it sounds like you could
appeal to distinct things. One is this was the rule, but
it was applied wrong to me. This, in fact, was
parody, so it shouldn’t be seen as near the line. And I want the independent
body to look at that. The other would be
the rule is wrong. The rule should change because– and you’re thinking the
independent body could weigh in on both of those. MARK ZUCKERBERG: Yeah. Over time, I would like the role
of the independent oversight board to be able to expand to
do additional things, as well. I think the question
is, it’s hard enough to even set something
up that’s going to codify the
values that we have around expression and safety
on a relatively defined topic. So I think the question is,
if you kind of view this as an experiment in
institution building where we’re trying to
build this thing that is going to have real power to– I mean, I will not
be able to make a decision that
overturns what they say, which I think is good. I think also, it
raises the stakes. We need to make sure
we get this right. JONATHAN ZITTRAIN:
It’s fascinating. I mean, it’s huge. I think the way
you’re describing, I wouldn’t want to
understate that this is not a usual way of doing business. MARK ZUCKERBERG: Yeah. But I think this is– I really care about
getting this right. But I think you want to
start with something that’s relatively well-defined
and then hopefully expand it to be able to cover
more things over time. So in the beginning, I think
one question that could come up is– I mean, it’s always dangerous
talking about legal precedents when I’m– this might be one of my first
times at Harvard Law School. I did not spend a lot of time
here when I was in undergrad. But, I mean, if the Supreme
Court overturns something, they don’t tell Congress
what the law should be. They just say, there
is an issue here. And then, basically,
there’s a process. All right. So if I’m getting it wrong– [LAUGHTER] All right. I shouldn’t have done that. JONATHAN ZITTRAIN: No, no. It’s fine. MARK ZUCKERBERG: I
thought it was dangerous. JONATHAN ZITTRAIN: There
are a number of people who do agree with you. MARK ZUCKERBERG: Oh, so
that’s an open question that that’s how it works. JONATHAN ZITTRAIN: It’s a
highly debated question, yes. MARK ZUCKERBERG: All right. JONATHAN ZITTRAIN: There’s
the I’m just the umpire calling balls on strikes. And, in fact, the
first type of question we brought up– which was, hey,
we get this is the standard. Does it apply here– lends itself a little more
to you get three swings. And if you miss them all,
you can’t keep playing. The umpire can usher you
away from home plate. This is– I'm really digging
deep into my knowledge now of baseball. There's another thing– MARK ZUCKERBERG: It's OK. I'm not the person who's going
to call you out for getting something wrong there. JONATHAN ZITTRAIN:
I appreciate that. MARK ZUCKERBERG:
That’s why I also need to have a librarian next door. JONATHAN ZITTRAIN: Very good. I don’t know how much librarians
tend to know about baseball, but we digress. We’re going to get
letters, RIPR mentions. But whether or not the game is
actually any good with a three strikes rule– maybe there should be
two or four or whatever– starts to ask of the umpire
more than just your best sense of how that play just went. Both may be something– both are surely beyond standard
customer service issues. So both could maybe be
usefully externalized. What you’d ask the board
to do in the category one kind of stuff– maybe it’s true that
professional umpirage could help us, and there are
people who are jurists who can do that worldwide. For the other, whether it’s the
Supreme Court or the so-called common law in the state courts,
where often a state supreme court will be like, henceforth,
50 feet needs to be the height of a baseball net– and if you don’t
agree, legislature, we'll hear from you. But until then, it's 50 feet. They really do kind
of get into the weeds. They derive maybe
some legitimacy for decisions like
that from being close to their communities. And it really brings us back
to a question of, is Facebook a global
community, a community of 2.x billion people
worldwide, transcending any national boundaries–
and for which I think so far on
these issues, it’s meant to be the
rule is the rule. It doesn’t really change the
terms of service from one place to another– versus how much
do we think of it as somehow localized,
whether or not localized through government, but where
different local communities make their own judgments? MARK ZUCKERBERG: Yeah. That is one of
the big questions. And right now, we have community
standards that are global. We follow local
laws, as you say. But I think the idea is– I don’t think we want
to end up in a place where we have very different
norms in different places, but you want to have some
sense of representation and making sure that the body
that can deliberate on this has a good diversity of views. So these are a lot
of the things that we’re trying to figure out,
is how big is the body? When decisions are made, are
they made by the whole body, or is it– do you have panels of people
that are smaller sets? If there are panels,
how do you make sure that you’re not just
getting a random sample that kind of skews in the values
perspective towards one thing? So then, there are a bunch
of mechanisms like, OK. Maybe one panel that’s
randomly constituted decides on whether
the board will take up a question or one of
the issues, but then a separate random
panel of the group actually does the decisions. That way, you
eliminate some risk that any given panel is going
to be too ideologically skewed.
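(A minimal sketch, assuming nothing more than a flat membership list, of the two-panel mechanism just described: one randomly drawn panel decides whether to take the case, and a disjoint random panel decides it, so no single skewed draw controls both steps.)

    import random

    def draw_panels(board, panel_size, seed=None):
        assert len(board) >= 2 * panel_size
        members = list(board)
        random.Random(seed).shuffle(members)
        intake_panel = members[:panel_size]                   # decides whether to take the case
        decision_panel = members[panel_size:2 * panel_size]   # disjoint; decides the case itself
        return intake_panel, decision_panel

    intake, decision = draw_panels([f"member_{i}" for i in range(40)], 5)

So there's a bunch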
of things that I think we need to think
through and work through. But the goal on this
is to, over time, have it grow into something
that can provide greater accountability and
oversight to potentially more of the hard
questions that we face. But I think it’s so high stakes
that starting with something that’s relatively defined
is going to be the right way to go in the beginning. JONATHAN ZITTRAIN: Yes. MARK ZUCKERBERG: So
regardless of the fact that I was unaware
of the controversy around the legal point
that I made a second ago, I do think in our
case, it makes sense to start with not
having this group say what the policies are going
to be, but just have there be– have it be able to say,
hey, we think that you guys are on the wrong side on this. And maybe you should rethink
where the policy is because we think you’re on the wrong side. There’s one other thing that
I think is worth calling out, which is in a typical kind of
judicial analog, at least here in the US, my
understanding is there’s the kind of appeal route to the
independent board considering an issue. But I also think that we
want to have an avenue where we, as the company,
can also just raise hard issues that come up to
the board without having– which I don’t actually know if
there’s any mechanism for that. JONATHAN ZITTRAIN: It’s
called advisory opinion. MARK ZUCKERBERG: OK. JONATHAN ZITTRAIN: But
under US federal law, it’s not allowed because of
Article III case or controversy requirement. But state courts
do it all the time. You’ll have a federal
court sometimes say– because it’s a federal court,
but it’s deciding something under state law. It’ll be like, I don’t know. Ask Florida. And they’ll be
like, hey, Florida. And then, Florida
is just Florida. MARK ZUCKERBERG: Yeah. So that will– JONATHAN ZITTRAIN: You could
do an advisory opinion. MARK ZUCKERBERG:
That’ll end up being an important part of this, too. We’re never going to be able
to get out of the business of making front line judgments. We’ll have the AI
systems flag content that they think is against
policies or could be. And then, we’ll have people–
this set of 30,000 people, which is growing– that are trained to
basically understand what the policies are. We have to make the
frontline decisions because a lot of
this stuff needs to get handled in a timely way. And a more deliberative
process that’s thinking about the fairness
and the policies overall should happen over a
different time frame than what is often
relevant, which is the enforcement of
the initial policy. But I do think overall, for a
lot of the biggest questions, I just want to build a
more independent process. JONATHAN ZITTRAIN:
Well, as you say, it’s an area with fractal
complexity in the best of ways. And it really is terra incognita. And it would be exciting to
see how it might be built out. I imagine there’s a
number of law professors around the world,
including some who come from civil rather than
common law jurisdictions, who are like, this is
how it works over here, from which you could draw. Another lingering
question would be lawyers often have a bad reputation. I have no idea why. But they often are
the glue for a system like this so that the
judge does not have to be oracular or omniscient. There is a process where
the lawyer for one side does a ton of work and looks at
prior decisions of this board and says, well, this is
what would be consistent. And the other lawyer comes back. And then, the judge just gets
to decide between the two rather than having to
just know everything. There’s a huge trade here
for every appealed content decision. How much do we want to
build it into a case? And you need experts
to help the parties, versus they each just sort
of come before Solomon and say, this kind of happened. Or Judge Judy maybe is a
more contemporary reference. MARK ZUCKERBERG: Somewhere
in between the two. Yeah. JONATHAN ZITTRAIN: Yeah. So it’s a lot of stuff. And for me, I both find myself– I don’t know if this is the
definition of prurient– both excited by it and somewhat
terrified by it, but very much saying that it’s better than
a status quo, which is where I think you and I are
completely agreeing, and maybe a model for
other firms out there. So that’s the last
question in this area that pops to my mind, which
is, what part of what you are developing at Facebook,
and a lot of which is really resource intensive, is best
thought of as a public good to be shared, including
among basically competitors, versus that’s part of
our comparative advantage and our secret sauce? If you develop a
particularly good algorithm that can really well
detect fake news or spammers or bad actors– you’ve got the PhDs. You’ve got the processors. Is that like, in
your face, Schmitter? Or is it like, we
should have somebody– some body– that can help
democratize that advance? And it could be
the same to be said for these content decisions. How do you think about that? MARK ZUCKERBERG: Yeah. So, certainly, the threat
sharing and security work that you just referenced
is a good area where there’s much
better collaboration now than there was historically. I think that that’s just because
everyone recognizes that it's a much more important issue now. And, by the way, there's
much better collaboration with governments now, too, on
this, and not just our own here in the US and law enforcement,
but around the world with election
commissions and law enforcement, because there’s
just a broad awareness that these are issues. JONATHAN ZITTRAIN: Especially
if you have state actors in the mix as the adversary. MARK ZUCKERBERG: Yes. So that’s certainly
an area where there’s much better collaboration now. And that’s good. There’s still issues. For example, if you’re law
enforcement or intelligence, and you have developed a– source is not the right word. But basically, if you’ve
identified someone as a source of signals that
you can watch and learn about, then you may not want to
come to us and tell us, hey, we’ve identified
that the state actor is doing this bad thing. Because then, the natural thing
that we’re going to want to do is make sure that they’re not
on our system doing bad things or that they’re not– either they’re not
in the system at all or that we’re interfering
with the bad things that they’re trying to do. So there’s some
mismatch of incentives. But as you build up
the relationships and trust, you can get to
that kind of a relationship where they can also
flag for you, hey, this is where we’re at. So I just think having that
kind of baseline where you build that up over time is helpful. And I think security
and safety is probably the biggest area of that
kind of collaboration now across all the different
types of threats, not just election and democratic
process type stuff, but any kind of safety issue. The other area where I tend to
think about what we’re doing should be open is just technical
infrastructure overall. I mean, that is probably a
less controversial piece. But we open source a
lot of the basic stuff that runs our systems. And I think that that is
a contribution that I’m quite proud of that we do. We have sort of
pioneered this way of thinking about
how people connect and the data model around
that as more of a graph and the idea of graph database. And a lot of the
infrastructure for being able to efficiently access
that kind of content I think is broadly applicable beyond
the context of a social network.
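(A minimal sketch of the people-centered graph data model he is describing: typed edges between people and the things they connect to, with fast neighbor lookup. The schema is illustrative only, not Facebook's actual graph store.)

    from collections import defaultdict

    class SocialGraph:
        def __init__(self):
            self.adj = defaultdict(set)  # (node, edge_type) -> neighbors

        def add_edge(self, src, edge_type, dst):
            self.adj[(src, edge_type)].add(dst)

        def neighbors(self, node, edge_type):
            return self.adj[(node, edge_type)]

    g = SocialGraph()
    g.add_edge("alice", "friend", "bob")
    g.add_edge("alice", "liked", "page:harvard")
    g.neighbors("alice", "friend")  # {'bob'}

When I was at– when I was here– it was in undergrad, even though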
I wasn’t here for very long. And I studied psychology
and computer science. And to me, I mean, my grounding
philosophy on this stuff is that basically,
people should be at the center of more of the
technology that we build. I mean, one of the early things
that I kind of recognized when I was a student
was, at the time, there were internet sites for
finding almost anything you cared about, whether it’s
books or music or news or information or businesses. But as people, we
think about the world primarily in terms
of other people, not in terms of other
objects, not cutting things up in terms of content or
commerce or politics or different things. But it’s like– the
stuff should be organized around the connections
that people have where people are at the
centerpiece of that. And one of the missions
that I care about is over time just pushing
more technology industry– more technology development
to the tech industry overall to develop
things with that mindset. I think this is a
little bit of a tangent, but the way that our
phones work today and all computing systems
organized around apps and tasks is fundamentally not
how our brains work and how we approach the world. It’s not– so that’s one of
the reasons why I’m just very excited longer term
about especially things like augmented
reality, because it’ll give us a platform that I think actually
is how we think about stuff. We’ll be able to bring
the computational objects into the world, but,
fundamentally, we’ll be interacting as
people around them. The whole thing
won’t be organized around an app or a task. It’ll be organized
around people. And that, I think, is a much
more natural and human system for how our technology
should be organized. So open source and all
of that infrastructure to do that and enabling not
just us, but other companies, to kind of get that mindset
into more of their thinking and the technical
underpinning of that is just something that I
care really deeply about. JONATHAN ZITTRAIN:
Well, this is nice. And this is bringing
us in for our landing because we’re talking about
10, 20, 30 years ahead. As a term of art, I
understand augmented reality to mean I’ve got a visor. Version 0.1 was Google Glass. Something where I’m kind of
out in the world, but I’m literally online
at the same time because there’s data coming
at me in some console. That’s what you were
talking about, correct? MARK ZUCKERBERG: Yeah,
although, it really should be glasses like what you have. I think we’ll
probably– maybe they’ll have to be a little bigger,
but not too much bigger. Also, it would
start to get weird. I don’t think a visor
is going to catch on. I don't think that that's– I don't think anyone
is psyched about that. JONATHAN ZITTRAIN: And
anything involving surgery starts to sound a
little bad, too. MARK ZUCKERBERG: No, no. We’re definitely focused
on that [INAUDIBLE]. Although– JONATHAN ZITTRAIN:
Don’t make news. Don’t make news. Don’t make news. [LAUGHTER] MARK ZUCKERBERG: No, no. Although, we have showed
this demo of basically can someone type by thinking. And, of course,
when you’re talking about brain-computer
interfaces, there’s two dimensions of that work. There’s the external stuff,
and there’s the internal stuff and invasive. And yes, of course,
if you’re actually trying to build things that
everyone is going to use, you’re going to want to focus
on the non-invasive techniques. JONATHAN ZITTRAIN: Yes. Can you type by thinking? MARK ZUCKERBERG: You can. JONATHAN ZITTRAIN: It’s
called a Ouija board. No. But you’re subvocalizing
enough where there’s enough of a need of a– MARK ZUCKERBERG:
So there’s actually a bunch of the research here. There’s a question of throughput
and how quickly can you type and how many bits can
you express efficiently. But the basic foundation
for the research is a bunch of folks who
are doing this research showed a bunch of
people images– I think it was animals. So it’s like, here’s an
elephant, here’s a giraffe, while having a
net on their head. Noninvasive, but shining
light and therefore looking at the level of blood
activity and just blood flow and activity in the brain. Trained a machine
learning system basically on what the pattern
of that imagery looked like when the person was
looking at different animals, then told the person to
think about an animal. So think about– just pick one
of the animals to think about. And it could predict what
the person was thinking about in broad strokes
just based on matching the neural activity. So the question is– so
you can use that to type.
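(A minimal sketch, on made-up data, of the study as he describes it: average the activity pattern recorded while a person views each animal, then classify a new recording by its nearest average pattern. Real decoding work is far more involved.)

    import numpy as np

    def fit_centroids(recordings, labels):
        # recordings: (n_trials, n_features) blood-flow feature vectors
        return {c: recordings[labels == c].mean(axis=0) for c in np.unique(labels)}

    def predict(centroids, trial):
        # Pick the animal whose average pattern is closest to this trial.
        return min(centroids, key=lambda c: np.linalg.norm(trial - centroids[c]))

    # Toy data: two "animals" (0 = elephant, 1 = giraffe), 10 features each.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 10)) + np.repeat([[0.0], [1.0]], 10, axis=0)
    y = np.repeat([0, 1], 10)
    predict(fit_centroids(X, y), X[0])  # most likely 0, i.e. broad strokes only

JONATHAN ZITTRAIN: Fifth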
Amendment implications are staggering. Sorry. MARK ZUCKERBERG: Well, yes. I mean, presumably, this would
be something that someone would choose to use as a product. JONATHAN ZITTRAIN: Yes. MARK ZUCKERBERG: I’m not– yeah. I mean, yes. There’s, of course, all
the other implications. But yeah. I think that this
is going to be– that’s going to be
an interesting thing down the line. JONATHAN ZITTRAIN: But
basically, your vision, then, for a future– MARK ZUCKERBERG: I don’t
know how we got onto this. [LAUGHTER] JONATHAN ZITTRAIN:
You can’t blame me. You brought this up. MARK ZUCKERBERG: I did. But of all the things that I
thought we were going to talk– I mean, this is exciting. But it’s like, we haven’t even– we haven’t even
covered yet how we should talk about tech
regulation and all of this stuff I
figured we’d get into. I mean, we’ll be here for
like six or seven hours. I mean, how many days
do you want to spend here talking about this? JONATHAN ZITTRAIN: We're here at
the Zuckerberg-Zittrain hostage crisis. [LAUGHTER] The building is surrounded. Yeah. MARK ZUCKERBERG: But
I think a little bit on future tech and research
is interesting, too, so we’re good. JONATHAN ZITTRAIN: We did
cover it is what you’re saying. MARK ZUCKERBERG: But going back
to your question about what– probably to– if
this is the last topic. What I'm excited about for
the next 10 or 20 years– I do think over the long
term, reshaping our computing platforms to be fundamentally
more about people and how we process the world
is a really fundamental thing. Over the nearer term,
so call it five years, I think the clear
trend is towards more private communication,
if you look at all of the different
ways that people want to share and communicate
across the internet. But we have a good sense of
the cross section, everything from one-on-one messages to
kind of broadcasting publicly. The thing that is
growing the fastest is private communication, right? So between WhatsApp and
Messenger and Instagram now, just the number of
private messages, it’s about 100 billion a day
through those systems alone. Growing very quickly. It's growing much
faster than the amount that people want to
share or broadcast into a feed type system. Of the type of broadcast
content that people are doing, the thing that is growing by
far the fastest is stories, so ephemeral sharing of
I’m going to put this out, but I want to have a time frame,
after which the data goes away. So I think that
that just gives you a sense of where the hub of
social activity is going. It also is how we’re
thinking about the strategy of the company. When we talk about privacy, I
think a lot of the questions are often about privacy policies
and legal or policy type things and privacy as a
thing not to be breached and making sure that you’re
within the bounds of what is good. But I actually think that
there’s a much more– there’s another element of
this that’s really fundamental, which is that people
want tools that give them new contexts to communicate. And that’s also fundamentally
about giving people power through privacy, not just
not violating privacy. So not violating
privacy is a backstop. But actually, you can kind of
think about all of the success that Facebook has had– this is kind of a
counterintuitive thing– has been because
we’ve given people new private or semiprivate ways
to communicate things that they wouldn’t have had before. So thinking about Facebook
as an innovator in privacy is certainly not
the mainstream view. But going back to the very
first thing that we did, making it so Harvard students
could communicate in a way that they had some
confidence that their content and information would be
shared with only people within that community,
there was no way that people had to communicate
stuff at that scale but not have it either
be completely public or with just a small
set of people before. And people’s desire to
be understood and express themselves and be
able to communicate with all different kinds of
groups is, in the experience that I’ve had, nearly unbounded. And if you can give
people new ways to be able to communicate
safely and express themselves, then that is something that
people just have a deep thirst and desire for. So encryption is
really important because we take for
granted in the US that there’s good rule of law
and that the government isn’t too much in our business. But in a lot of places
around the world, especially where
WhatsApp is the biggest, people can’t take
that for granted. So having it so that you
really have confidence that you’re sharing something
one-on-one and it really is one-on-one– it’s not
one-on-one and the government there– actually makes it so people can
share things that they wouldn’t be comfortable otherwise doing. That’s power that you’re
giving people through building privacy innovations. Stories I just think
is another example of this where there are a
lot of things that people don’t want as part of
the permanent record, but want to express. And it’s not an
accident that that is becoming the
primary way that people want to share with
all of their friends, not putting something
in a feed that goes on their permanent record. There will always be
a use for that, too. People want to have a record,
and there’s a lot of value that you can build around that. You can have longer
term discussions. It’s harder to do
that around stories. There’s different
value for these things. But over the next
five years, I think we’re going to see all
of social networking kind of be reconstituted
around this base of private communication. And that’s something that
I’m just very excited about. I think that that’s like– it’s going to unlock a
lot of people’s ability to express themselves
and communicate things that they haven’t had
the tools to do before. And it’s going to
be the foundation for building a lot of really
important tools on top of that, too. JONATHAN ZITTRAIN: I mean,
that’s so interesting to me. I would not have predicted that
direction for the next five years. I would have figured,
gosh, if you already know with whom
you want to speak, there are so many tools to
speak with them, some of which are end-to-end, some of
which aren’t, some of which are roll your own
and open source. And there’s always a way to try
to make that easier and better. But that feels a little
bit to me like a kind of crowded space, not yet
knowing of the innovations that might lie ahead and means
of communicating with people you already know
you want to talk to. And for that, as you say,
if that’s where it’s at, you’re right that
encryption is going to be a big question and
otherwise technical design so that if the law comes
knocking on the door, what would the company
be in a position to say? This is the Apple
iPhone Cupertino– sorry, San Bernardino– case. And it also calls
to mind, will there be peer-to-peer
implementations of the things you’re thinking about
that might not even need the server at
all, and it’s basically just an app that people use? And if it’s going
to deliver an ad, it can still do that app side– and how much governments
will abide it. They have not,
for the most part, demanded technology
mandates to reshape how the technology works. They’re just saying,
if you’ve got it– in part, you’ve got it because
you want to serve ads– we want it. But if you don’t
even have it, it’s been rare for the governments
to say, well, you’ve got to build your system to do it. It did happen with the telephone
system back in the day. CALEA, the Communications
Assistance for Law Enforcement Act, did have federal law
in the United States saying, if you’re in the business of
building a phone network, AT&T, you’ve got to make it so we
can plug in as you go digital. And we haven’t yet
seen those mandates in the internet
software side so much. We could see that
coming up again. But it’s so funny because
if you’d asked me, I would have figured it’s
encountering people you haven’t met before and interacting
with them, for which all of the stuff about air
traffic control of what goes into your feed and how
much your stuff gets shared– all of those issues start
to rise to the fore. And it gets me
thinking about I ought to be able to make a feed
recipe that’s my recipe and fills it according
to Facebook variables, but I get to say what
the variables are. But I could see
that if you’re just thinking about
people communicating with the people they
already know and like, that is a very different realm. MARK ZUCKERBERG: It’s
not necessarily– it’s not just the people
that you already know. I do think we’ve really
focused on friends and family for the last 10 or 15 years. And I think a big part of what
we’re going to focus on now is around building
communities in different ways and all of the things– all the utility that
you can build on top of once you have a network
like this in place. So everything from
how people can do commerce better to things
like dating, which is– a lot of dating happens on
our services, but we don’t– we haven’t built any tools
specifically for that. JONATHAN ZITTRAIN: I do remember
the Facebook joint experiment– experiment is such
a terrible word– study by which one could predict
when two Facebook members are going to declare themselves
in a relationship months ahead of the actual declaration. I was thinking some
of the ancillary products for in-laws– MARK ZUCKERBERG:
That was very early. Yeah. So you’re right
that a lot of this is going to be about
utility that you can build on top of it. But a lot of these things are
fundamentally private, right? So if you’re thinking
about commerce, that people have a higher
expectation for privacy, the question is, is
the right context for that going to be around
an app like Facebook, which is broad, or an
app like Instagram? I think part of it is
the discovery part of it. I think we’ll be very
well served there. But then, we’ll also
transition to something that people want to be
more private and secure. Anyhow, we could proudly go
on for many hours on this, but maybe we should save this
for the round two of this that we’ll do in the future. JONATHAN ZITTRAIN: Indeed. So thanks so much
for coming out, for talking at such
length, for covering such a kaleidoscopic
range of topics. And we look forward to
the next time we see you. MARK ZUCKERBERG: Yeah. Thank you. JONATHAN ZITTRAIN: Thanks. [APPLAUSE]

