Lange Lecture 2018 | Jonathan L. Zittrain

Welcome to the
David Lange Lecture in intellectual property. This lecture used to be
called the Frye lecture, and it has brought just to the
very top intellectual property scholars and practitioners
to Duke for many years. Kip Frye is here, and many
thanks to Kip and Meredith Frye for endowing and then
renaming this lectureship in honor of David Lange, who
was Kip’s beloved professor. So a word about David
Lange, who’s seated up here in the front row. David came to Duke
law school in 1971 from the Chicago firm of
Isham, Lincoln, and Beale. And that Lincoln was
Robert Todd Lincoln, and I think David is
quite Lincolnesque. He became the Melvin
Shimm professor, and he’s still very much
a force in our midst. So many of us have been
influenced and moved to tears of sadness
or exasperation by David’s eloquence, his
wit, his insight, his ideas, his sense of humor,
and his passion. We’re very fortunate to
welcome Professor Zittrain here as the Lange lecturer. I saw professor
Zittrain earlier, and I said I hope you
won’t mind if I mentioned the fact that you have
more titles than the Duke of Edinburgh or more titles
than you can shake a stick at, if you could shake
a stick at titles. I don’t know why
we would do that. But he obviously is a person
of enormous reach and breadth and depth. Our own polymath, Jamie Boyle,
the William Neil Reynolds professor of law, will have
the honor of the introduction. So, Jamie. Thank you, David Lange. Thank you, Kip, and
thank you very much, Jonathan, for coming. Thank you to all of
you for attending. It’s an extraordinary honor to
introduce Jonathan Zittrain. We did in fact come back from
New Zealand for this lecture. That’s actually true. That is a fact. He’s like, oh, thanks
for telling me that. Jonathan Zittrain is the
George Bemis Professor of International Law
at Harvard Law School. It’s probably the
only one of his titles to which his claim
is somewhat dubious. He’s also a professor
of computer science. He’s also professor in
the business school. He’s also the director of the
Law Library at Harvard Law School. He’s also director
of the Berkman Center for Internet and Society. You might be thinking
at this point that he’s spreading
himself a little thin. My theory is that
Jonathan is not, in fact, a person, but rather a
hive mind or possibly a set of networked AIs, which
given the topic might be true. I should note that he’s
actually been confining himself in scope. He used to also hold a
professorship at Oxford at the same time. To a mutual friend,
he described Oxford as being like Harry Potter
but without the magic. So I just want to say two
things about Jonathan, about why I’m so
excited to have him here and to encourage you
to look at his work. Jonathan has been
hailed as a visionary, but he has failed notably
to understand what a visionary is supposed to do. Visionaries in both technology,
entrepreneurship, academia are supposed to come up
with wildly ambitious ideas that they expect other
people to carry through. That’s the whole point
of being a visionary, is that you don’t actually
do the work yourself. Jonathan has failed
to internalize this and has repeatedly over time
actually implemented the ideas that he’s put forward. David Lange is in many
ways one of the fathers of the public domain. Jonathan, when he took over
at Harvard Law library, asked them what they were doing
with their entire, enormous corpus of cases of case law,
not just from the United States, but from every
country in the world. Well, they’re sitting
in the stacks, largely unread, largely unused. Fine, said Jonathan. Let’s put aside the truly
historically valuable ones, and we’ll take the rest and chop
them all up and digitize them. And the librarians said, what? Yeah, we’ll chop off
the covers and we’ll feed them into machines
and digitize them. And then the world can have them, said the head
of Harvard Law School Library. I actually saw a librarian
put one of the books into the machine. The machine was basically
looked like a giant set of rotating knives. And it was like watching
someone feed their baby. And it was sort of like, I
guess this is a good idea, as he fed it in. It’s a remarkable
achievement, and only someone with his unflagging
good humor and lack of understanding for
what is truly possible could have achieved it. Jonathan’s work, beyond his
work on the public domain, has really focused on the
boundary line between freedom and control in technology,
the fractal, chaotic boundary line between it. His wonderful book, the
Future of the Internet and How to Stop It, before
Russian hackers were the topic of late night talk
shows and political debates, Jonathan was pointing
out that the internet depended on freedom and
was threatened by freedom. And his proposals there, which
I still think among the wisest ever to be put forward,
is that we can actually use the very openness of
the internet that makes it vulnerable to help cure it. And he has actually implemented
that in a number of ways, including, effectively,
peer-reviewed studies of what sites are being
blocked around the world– Herdict, a portmanteau of
herd behavior and verdict. That was one of the
things he set up. And he’s done so much more. Today his topic is the copyright
wars and what they teach us about something which is
even more topical in today’s discussions– artificial
intelligence, Jonathan Zittrain. Thank you so much, Jamie,
for that introduction. Thank you both Jamie and
Jennifer for coming back from New Zealand for this. It really does kind of turn
up the pressure a little bit. But at least you
get the centerpiece. You get to take that home
because you came the farthest. So thank you for that. There is a centerpiece. Yes, definitely a centerpiece. So thank you so much
for that introduction. Thank you, Professor
Lange, for all of your work and for the ways in which it
is represented in this field, Kip, for making this lecture
possible, and Dean Levi, thank you for
inviting me as well. I wanted to reflect
on the wars in which I had but a small part,
perhaps a bugler or something– a junior bugler,
or mascot, as Professor Boyle and others were generals
leading the charge back– think back to the 1990s– and what lessons there might
be from the battles we thought we were fighting, the
tools that were deployed, the doctrinal background for the
kind of vague sense of unease that many of us share right now
along with a sense of promise about the future of
artificial intelligence. And as I thought about
it in the context of preparing for a talk grounded
in intellectual property generally and
copyright specifically, I found a lot of
connections actually that seemed quite interesting. And I wanted to just
share a few of them. So the first connection
between the two is fear. There was a lot
of fear in the air in the 1990s around the state
and the fragility of copyright, and there is also fear now around the state of AI. So luminaries as
bright as Bill Gates think that artificial
intelligence is a threat. The late Stephen Hawking said
the development of full AI could spell the end
of the human race. We also have Elon Musk saying,
with artificial intelligence, we– and by we, he literally means
Elon Musk and his friends– are summoning the demon. And Nick Bostrom,
philosopher at my old haunt, the University of
Oxford, hedges a little bit– when superintelligence
emerges, it could be great, but it could also decide
it doesn’t need humans around or do any number of other
things that destroy the world. Kind of like the end of
an article in Newsweek– remember, Newsweek–
it would always be, the future is uncertain,
but one thing is clear. If things don’t get better,
they could certainly get a lot worse. So it’s kind of a self-
balancing equation there. But still fear
predominates as we think about the future at
least of so-called general AI. And now when I say AI, I
should say what I mean just so, if there happened
to be any computer scientists in the room, they
will not shake sticks at me for using AI, which is
often used broadly as a term of art. I mean, it– and this is
why I’m not an ad person– as an arcane, pervasive,
tightly coupled, adaptive, autonomous system. Each of these
adjectives could be used to describe what loosely
we call artificial intelligence, and the more the dial is turned
on each of them, the more that we’re really
talking about AI. It’s a complicated
kind of label to me because it’s really
talking about a product in juxtaposition with a human,
artificial intelligence, more than it is a particular
set of methodologies. But those are the factors,
and I won’t go into them just now, except to say,
by autonomous of course, it means it operates on its own. There can often be a little bit
of a bugler or somebody trying to right the wheel
as it wobbles. And that’s why I tend to call
them autonomish systems. They’re kind of autonomous,
but need not be fully so. So now in copyright, to capture
some of the fears of the day, this was a law review
article from 1995. And you know because it says
telecommunications, footnote, and the information
superhighway footnote, facilitate
instantaneous mobility of literary, artistic
works, footnote. With a click of a mouse
or the tap of a key, virtually anyone with a
computer or a telephone can obtain vast
quantities of information from almost anywhere
on the globe, footnote. And what do we draw from this? Here’s the next sentence
of the article– these conditions pose
a formidable challenge to the international protection
of intellectual property. You had me so excited, and
now I realize there’s actually a monster under the bed. And John Perry Barlow, the
recently late John Perry Barlow, wrote a
seminal article also at the time called the economy
of ideas, in which he said, this is totally
different, folks. Thanks to the
internet, don’t think you can apply the copyright
you used to know to it or retrofit it. It’s an entirely new
set of circumstances. John Perry Barlow, of course,
founder of the Electronic Frontier Foundation on whose
board I sit and lyricist for the Grateful Dead– here’s one review at the time
by a lawyer of John Perry’s article. You can see, as I was
caught by everything you know about intellectual
property is wrong, the thesis of the article, as
far as I can make out– oh, dagger goes in there–
is that copyright is meaningless in
the age of high tech. And then he says, in
result, in the end, I concluded only
that Mr. Barlow must have consumed more than his
share of prohibited substances while writing all of those
Grateful Dead lyrics. So those were sort of
the battle lines drawn for the protection of
copyright: consumers who
normally would get it in sort of pre-digested formats–
through the radio, on a record, in a book– and not
be in a position to pass it along
at great volume. Now they could, and that,
of course, led to problems. One other conception
of the fear here is an ad from about 1984 for
the Sony Betamax video tape recorder– watch whatever, whenever. That was sort of
the promise then. I think we’re finally starting
to deliver on that today. And Jack Valenti of the
Motion Picture Association delivered testimony
before Congress capturing the fear of
the industry at the time. He ended up saying,
well, there’s another lobbyist that said
the VCR is the greatest friend the American
film producer ever had. I say to you, the VCR is to
the American film producer and the public as the Boston
Strangler is to the woman home alone. I’m just the messenger. It’s in the record. And therefore, I can repeat
it and publicly perform it without copyright worries. So that’s the kind
of fear at the time, and there’s I think an echo
again of unease and almost a call to action,
but we’re not really sure what to do about it. That was true of the
industry then for copyright, and it’s true of
those of us who may feel that our own
autonomy in various ways is being put to the test
as AI develops as well. Another interesting
contrast between the two is the home of copyright,
of course, is title 17. The home of protections against
any abuse of the technologies loosely called artificial
intelligence, I would say, is untitled. We have yet to figure out
what that should look like, and it’s actually a
really important point because, when you
think about copyright, one of the lessons of
copyright is that there’s a lot of lessons of copyright. There are whole
books you could write about intellectual property
in general and copyright in particular. Title 17’s law is very, very elaborate,
saying what you can and can’t do with the kinds
of actions that impinge upon the
productive works of others. And for artificial
intelligence, I think it’s more like kind of– we don’t have anything
yet that describes it. But it’s not just a kind
of uncertain frontier. We haven’t filled
in the blanks yet. It’s that there’s an
entire contrast between the two: in copyright, generally
speaking what isn’t– what isn’t prohibited
is what you may do. Sorry, that’s
artificial intelligence. Here what isn’t permitted
is what you may not do. The default is paralysis from
the point of view of a consumer or a copier or a
derivative worker, whereas here the
default is go for it unless there’s some very narrow,
specific restriction that applies. And in copyright, we see
that with just the example of a public performance, that
actually singing or playing a song implicates
the rights of others. There was a famous
instance in which ASCAP, the licensing organization,
ended up licensing a number of Girl Scout camps. Presumably– this is what
became the story at least– if the Girl Scouts wanted
to sing Puff the Magic Dragon around the campfire,
they needed to be licensed up or they could go to jail. And that’s a dramatic
way of saying it. And here’s the wonderful quote
from ASCAP’s chief operating officer. This is their own press release. So they had malice
aforethought. They’d buy twine and
glue for their crafts. They can for their music, too. The COO was rapidly replaced. And ASCAP clarified its position
that it had never sought nor was it ever its intention to
license the Girl Scouts’ singing. They just accidentally
licensed 288 camps for a yearly fee of $257. Who hasn’t done
that, to be honest? But it’s a great example
of, you’re in a position to assert licensing
for activities that the general public might
think are quite anodyne. That’s the copyright
structure, whereas in AI, it’s quite different. I can’t resist one other
example since it’s sort of meta. Here from the
University of Texas, the office of the
general counsel suggested that
professors, before they give a lecture like this
one, utter these words, whereas you are authorized
to take notes in class, thereby creating a derivative
work from my lecture. The authorization
extends only to making one set of notes for your own
personal use and no other use. I’m surprised you don’t have to
burn them after you graduate, and certainly you shouldn’t
make use of the knowledge when you graduate without
a further license. That was, again, kind
of the situation where you could at least purport to
rely on a scaffolding of laws that protect a creator. And of course, it could be used
for other purposes as well. Here the General
Public License, designed to keep things free, in
the words of Eben Moglen or of Richard Stallman,
itself says, yes, thanks to copyright law,
we can put conditions on what you do with
software, telling you that, if you share it, you have
to share the source code too. And you don’t even
have to agree to it because it’s a background
law, a rule that isn’t dependent on contract. That at least was the
claim of Eben Moglen. Now compare back to AI– this is the, hey, if it isn’t
specifically prohibited, let’s go for it. This is a wonderful
study that Facebook did, showing that, on
the basis of data points it gathers about postings,
it can predict days before when two
members of Facebook are going to be in a
romantic relationship. That’s very useful information. There are derivative works
they could do with it, such as a prospective
in-law alert service, maybe even giving
them a chance to alter the news feed to prevent these
star-crossed lovers from coming together at all. And if I’m the person about whom
these judgments are being made, it’s not clear I have
any action of the sort that ASCAP on behalf
of the Puff author might have against
the Girl Scouts. And similarly, a neat use
of data in the AI field is this study from
2010, which showed that just one little note that
it was election day in the news feed of North American visitors
to Facebook on election day for the US federal
congressional elections could bump voting tallies
by a significant– both in the statistical sense
and the sense of enough votes– way to, say, alter
the outcome of what was Bush vs Gore in 2000. Fascinating study– got me
thinking about whether, well what if Facebook were to have
its own dog in the fight. It prefers one candidate
over another in an election, and it chooses to alert people
on Facebook that it suspects or knows will vote according
to Facebook’s preference that it’s election day. And for the rest, it just sends
them another picture of a cat. Would that be infringing
anybody’s rights? Would that be
wrong even if there weren’t some actionable way
of dealing with the wrong? That was the question I posed
in 2014, not quite realizing what was in store in
the election of 2016. And I think these questions
largely remain unanswered. They were asked again
during the 2016 election campaign in March. There was an internal
Facebook meeting. The employees got together
through a little bit of an interface to
suggest questions to ask Mark at the front
of the room, the CEO. And one of them was, what
responsibility does Facebook have to help prevent
President Trump in 2017? Oddly enough, that
question, though it did get a lot of votes, was not
asked of Mark at the meeting, but word leaked out, leading
to headlines like this and leading to a statement
from Facebook saying, voting is a core
value of democracy. We encourage everybody– as
a company, we are neutral. We have not and will
not use our products in the way that attempts to
influence how people vote. Which, if you read
it like a lawyer, does not say they wouldn’t try
to do differential turnouts. But maybe we can help them smith
the statement a little bit. One other note about
elaborate rules in copyright versus everything’s fair game
unless it’s specifically restricted in AI. Barlow, again, has this
famous quote– in cyberspace, the First Amendment
is a local ordinance– I think speaking to the
international global nature of the internet in the way
in which it was designed, not to particularly hew to any
particular law or constitution. But it’s also an
interesting reminder that copyright has a
kind of special status. It really is, if you think
about it colloquially at least, it’s all about speech. It’s what you could sing or
say or write or derive from. And yet generally speaking,
the First Amendment is said in America
not to apply– there’s no test that copyright
in general or an application has to pass under
the First Amendment. And Golan versus Holder
tested that notion as the US became party to a treaty
that would retroactively place under copyright
works like Peter and the Wolf that had
been in the public domain, and now they’re under
copyright because formalities are retroactively not
needed to continue to sustain the copyright
for an overseas work. It goes all the way up
to the Supreme Court. The Supreme Court
says that’s true. The First Amendment is not
going to be a bar to that. If you were in the middle of
performing Peter and the Wolf, you’d have to hurry before
the effective date of the law and finish it before the
license is required at the end as the law takes effect. Speaking of derivative
works and such, it’s interesting
to think about them in the context of both copyright
and artificial intelligence, and then a kind of bumper
about the Digital Millennium Copyright Act and the
ways in which lessons might be drawn back and forth. Once again, comparing
the two for remix, one of the most
articulate voices speaking to the value of
remix culture of being able to prepare a
derivative work, of being able to think freely and to
see it as freedom of mind– not just as, oh, is this
a particular defense to a claim of
copyright infringement. Of course, there’s
Professor Boyle. And he made a case,
not only really about the law as it stood–
because he pointed out that the law as it stood
may not be so well suited to our values. He spoke to our
values and encouraged us to reflect upon and
reinforce the values of semiotic democracy,
of free and open culture. And in that sense, in our
band of merry travelers, we often think of derivative
works as a great thing and usually not
something that eats away at the market for
the original work because, as good
as Jamie’s voice is, when he performs
that song, it’s not clear it’s ruining the
market for whoever really performed the song
to begin with. In artificial
intelligence, I think, the parallel derivative
works flip the parties. Just as some of the prior
examples from Facebook show, we are the originators, say,
of our own data or information about us, and then
derivative works are prepared by those
running the AIs. So we might think of a very
different configuration. Google proudly tells us
how the Google Assistant– one of those sort of
2001 monolithic objects that you can speak to, and it
gives you advice and counsel, that kind of thing–
is a kind of stew of different information
about you and about the world that the AI processes to
give you relevant answers. And it’s not just Google. Facebook does it as well. This is Facebook’s own
inferences about me on the basis of my
behavior on Facebook. I appear to be close friends
of women with a birthday in 7 to 30 days– not a category I knew
existed, but there it is. My multicultural affinity
is African-American. I am a close friend of expats. So Jamie, it’s official now. That’s great. And really up to the
minute, I am away from my hometown and my family. I’m going to check
it when I’m home and see if it has flipped
back for the purposes of– and I guess I have been
identified for voting purposes as being on one end of the
spectrum versus another. This may be somewhat
discomfiting to see this kind of stuff used. And you can see it not
just used for advertising, although we’ll talk
about that, but even for giving us more
relevant answers when we do organic search on
Google or even, dare I say, Bing. So here on Bing, I said,
should I vaccinate my child. And in Redmond, Washington,
people snapped awake, like someone’s using it. But should I vaccinate
my child, and I then looked at the answers. And of the top five,
four of the five were no, you should not
vaccinate your child. And one was fake news from
the government saying that you should vaccinate your baby. And there’s an interesting
question, first, about whether, is this Bing’s
problem, or is it like we are a window onto the
web and we don’t much like what we see either
but there you have it, and should they be
personalizing it? Should they be aware of my
own anxieties or preferences or beliefs about the medical
establishment when offering me answers on that front? And back to the
advertising context, we see, for instance, back
in the day– it’s since been banned at least
on paper by Facebook, not by government– payday
loans might be offered just in time to somebody under
emotional duress known to have just, say, lost
a job, in need of money. And then, great here
is a usurious loan by somebody hiding her identity
behind a sheaf of money. It could be yours. Don’t worry about what
it will cost later. There’s nothing to fax. I certainly hope not. So in the political context,
like if you agree– which I can’t tell if
it’s like if you agree– we’re full. Go home. No more refugees. No more illegals. Deport them, and keep them
out, courtesy of the Russian Federation targeted to
people in the course of the last campaign. And here from the
Mueller indictment of the relevant
parties associated with the Russian government
are some of the other ads that were run and targeted in
ways that no one was aware of except
the people targeted, on the basis of their
own deemed openness– susceptibility– to it. I think that’s kind
of an issue as we are bombarded more and
more, either by push or when we make an inquiry
and get an answer, with answers that feel
oracular, that feel like they’re part of our environment and
in fact might be misleading us in some way or based on
interests other than our own. And of course, we’re moving
away from organic search, where you search and you
get a bunch of results and you click on
something, and more towards the kind of there’s
just somebody on your shoulder ready to give you advice at
all times in the spirit of Siri or the Google Assistant. And it’s a technology– this
gets to the pervasive part of AI– that has been rolled
out without particularly vetting what happened. So here’s an example from
the Google Assistant. Is Obama planning a coup? According to secrets
of the Fed, according to details exposed in Western
Center for Journalism’s exclusive video,
not only can Obama be in bed with the
communist Chinese, but Obama may in fact be
planning a communist coup d’etat at the end
of his term in 2016. So I would say the
problem with that is not that they mispronounced
coup d’etat, but the engineers
are like, we’ll get that right on the next pass. That’s the answer from
Google to that question. And it’s equal opportunity. Here’s another question. Hey, Google, are
Republicans fascists. According to, yes,
Republicans equals Nazis. Asked and answered. Should I vaccinate
my Republican? There’s a real question
here about the ways in which these systems
operate, and they’re designed to be so
seamless that you have no sense of
what’s going into them, of who’s really answering. And for Google, to
be, like, yeah, well, we just found that lying
on the floor of the web. We dusted it off and
handed it to you. All right, now I understand. But the other half
of the time, Google is giving answers that come
from Google, ex cathedra. And I think that
is an opportunity, as they’re preparing all
sorts of derivative works, to think about what
kind of framework there should be of
information quality without getting the government
unduly into the business of telling Google what
to tell us about truth. And one of the things
originated by Jack Balkin, which both of us are
now trying to puzzle out, is maybe our kind of 17 USC 106,
our kind of foundational brick for AI in regulation. Maybe we should try on something
like this, the fiduciary duty, the basic idea that a doctor
might have to a patient, that a lawyer might
have to a client, that a stockbroker
until recently might have for
somebody there advising about where to put their life
savings to be loyal to them, to represent their
interests, not to cross them because they’re getting
a commission from Pfizer to write more scripts of
something that might not be needed for that patient. Or the lawyer has to
not commingle the funds of the person with
her own money, and the stockbroker
is supposed to say, no, I really think the
stock is good for you, not just because I’m
getting a commission. In fact, that would be a
conflict for me to get one. This fiduciary duty, which
has different incarnations from one field to another,
could have its own incarnation for platforms that
are advising us and that adopt the mien of a
friend or advisor to us the way that, when we invite them
into our living rooms, Siri, Alexa, Cortana– there’s not that many others– Google Assistant purport to do. And to just say, don’t put
your interests ahead of ours, don’t tell me it’s voting day
because you want me to vote. Tell me it’s voting
because you think I want to be sure that I vote. And I think there’s
a chance of getting these companies to go for it. Now why would they
go for regulation when they don’t have any at all? Well, I was thinking there’s a
part of the Digital Millennium Copyright Act called
the safe harbors. And the safe harbors were an
interesting way of saying, we don’t know, intermediary,
what your responsibility might be if somebody should
infringe copyright by posting something in your
comments or on your site or in your Dropbox. We don’t know. It’s just not clear. The law of contributory and
vicarious infringement was generally court-made and adapted. But if you’re willing
to do x, y, and z and it’s all laid
out in DMCA 512, then we give you a safe harbor. This DMCA 512, then, is not
requiring you to do anything. The addition of it to title 17
isn’t imposing any new duties. It’s only giving you
a possible immunity that you didn’t have before. And as it turns out,
across the industry now, anybody of any size
is 512 compliant. It does notice and takedown. Some of us lament that. We think it’s a bad scheme. But it’s a scheme that
the companies embraced. So we could have a
Digital Millennium Privacy Act mindful of the issues
raised by these AI systems that actually lays out something
like a light fiduciary duty and says, look,
for instance, you might be subject to any
of the attorneys general being unhappy with you, state
privacy rules or legislation. We’ll preempt all
of that if you agree to this, that, and the
other thing including the fiduciary rule. Totally up to you, though. You can have the status quo,
or you can opt into our zone. That to me is exactly borrowing,
showing where the DMCA shone the light in 1998, resolving– I think actually in retrospect– quite ably a lot of the
conflicts identified in 1984 and then 1994. We could do something
similar here in a way that I think the
companies might not fight. Now this, of course,
is all in the context of intermediary
liability, which I think was huge in
thinking about copyright and remains a big issue
when thinking about AI. And there’s this kind
of Kantian moment, which is just a basic rule
of philosophy which is ought implies can. If you say that somebody
ought to do something, it must be that they can do it. It would be weird to
demand of somebody that they do something
that is impossible. That’s a weird moral
stricture and legal one. The reversal of it is what
secondary liability is about. If you can do something
about it, when ought you to do something about it. When does can imply ought? And in copyright
infringement, there are times when we do–
either by exactly requiring it or by providing a safe harbor
that makes an incentive– ask these intermediaries
to deal with a problem that they were
otherwise ready to say, they’re just in the middle. It’s some complainant
and some infringer, and we should let
them sort it out. This question is going to
come up again and again and again as platforms no
longer have the excuse that they did in the early
internet days of just, there’s too much volume. Communications Decency
Act Section 230– not a copyright provision– displaces
vicarious doctrines, say, of
defamation, so that a website isn’t responsible for defamatory
content posted by another user, not by the website itself. Part of that basis was,
there’s just too much for them to control. And if they try,
that could get them into worse shape
under common law. So we’ll just give
them an immunity to encourage them
at least to try. With AI, it raises the
prospect that, between, if you’re Facebook, 10,000
people in the Philippines and a lot of AI, even though
you’ve got a billion comments a month, there’s some hope– hope? Some prospect that you
can sift through them and identify the harassing ones,
the terrorist recruiting ones, the copyright infringing
ones, you name it. And I think, as the
inevitable Bono put it– this was on the promenade out
of Davos at the last meeting– because we can, we must. So he’s already chosen his
view of intermediary liability. Of course, the Davos
promenade is a weird place to get aphorisms. They have crypto
HQ, which appeared to be a blockchain
cryptocurrency kind of thing that I’d not– I saw a lot of people going in. Nobody came out. So I was concerned about that. But these are the
kinds of things that are front and center
and hard to escape. It’s like with
autonomous vehicles. If you could have the
autonomous vehicle be smart enough to know
that there’s the school bus in front, shouldn’t
it, ahead of time, be designated to obligingly
drive off the cliff so that only the passengers are
injured rather than the people in the school bus? That was previously
left to fortune. Now it’s the kind of thing you
could plan ahead of time thanks to AI. The only real issue with
it– it’s very tempting– but the issue with
it is that the way that the most prominent
pieces of AI work right now, various flavors of
machine learning, has a kind of
alien nature to it. And it’s really fun to start
to study it a little bit. There’s a great website I
recommend called And the first publication
there actually is a really interesting
interactive display of how knowledge is represented
in one of these networks. The fact that it was called a
neural network is misleading. It is not a brain. It is not like our brains. It’s a series of circles with
lots of wires between them. And I guess, if circles
are neurons, so are we. But that’s about where
the similarities end. And one reason is because
they’re just really good at finding associations. And if you try hard enough and
have a big enough data set, you can find really
tight associations. So this is from a student of mine– suicides by hanging, strangulation, and suffocation correlate at 0.993796 with the number of lawyers in North Carolina. I don’t think the right answer is, I wonder which it is. If we reduce the lawyers, will the suicides go down, or the other way around? It’s just that, if you look hard enough, you will find something that tracks, or that is actually the inverse. Yeah, it was chance. And the more you look, the more there is to find.
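To make that concrete– and this is my own minimal sketch, not the student’s actual analysis– you can dredge purely random series and watch a near-perfect correlation fall out by chance alone:

```python
import random
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    spread = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / spread

random.seed(0)
n = 10  # a handful of annual observations, like the suicide/lawyer series
target = [random.gauss(0, 1) for _ in range(n)]

# Dredge 10,000 meaningless random series for the one that best matches.
best = max(
    abs(pearson(target, [random.gauss(0, 1) for _ in range(n)]))
    for _ in range(10_000)
)
print(f"best |correlation| found by chance: {best:.3f}")
```

With that many candidates, the winner routinely lands above 0.9– which is exactly why a tight correlation, by itself, proves nothing.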
Here is another of my favorites: potential
opium production in Afghanistan charted
against a silhouette of Mt. Everest. So if you just turn
your binoculars slightly to the right, we can figure
out what 2010’s number will be. But again, our instinct tells us that this is not true. The AI has no way of discerning that the associations it finds are nonsensical. And those adaptive
associations– this is the adaptive
piece of AI, where it’s just getting new data– and in fact the people giving it data know they’re giving it data– can lead in all sorts of directions. I don’t know how many people
remember Microsoft Tay. This was the bot– a version of it has been very successful in China. The idea is it would be just like a teenager– who wouldn’t want a bot like that, that you can kind of type at? And it’s a happy teenager, though, because it’s a Microsoft bot. And oddly enough, it went from humans are super cool to full Nazi in less than 24 hours– and I’m not at all concerned
about the future of AI because it was adapting on the
basis of the interactions it was having on Twitter. And 4chan and Reddit
were like, game on. And so, here it
was at T equals 0. I’m stoked to meet you. Humans are super cool. In about the middle of
the 24 hour period, chill, I’m a nice person. I just hate everybody. And by the end of
the 24 hour period, I [MUTED] hate feminists
and they should all die and burn in
hell, at which point, Microsoft pulled the
plug and pretended it never happened and
then got back on the horse about six months later with Zo. It’s kind of like One Flew Over the Cuckoo’s Nest. Zo sort of sat up in bed. It was like, hello. But anyway, you don’t know where they’re going to take you. And here in fact is Zo’s
successor, a recent article from February in the New York
Times talking about these chat bots that literally analyzed
so many conversations online, including on Twitter, that they
can just adaptively predict what the next sentence
in a conversation should be to pass muster. Now the way the New York Times
described it is, the system knows enough about the world
to identify Adele as a singer. I’m not so sure if you actually
look at the top 25 responses that it was prepared to give– number one is, she’s a singer. Number two is, I don’t know who Adele is. Number three is, she’s a singer. Number– what? It’s just–
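Mechanically– and this is an invented miniature, nowhere near the scale of those systems– such a predictor can just emit whatever most often followed the context in its training text:

```python
from collections import Counter, defaultdict

# A tiny invented "training corpus" of conversational text.
corpus = ("who is adele ? she is a singer . who is bono ? he is a singer . "
          "who is kant ? he is a philosopher .").split()

# Bigram table: for each word, count every word that ever followed it.
follow = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follow[a][b] += 1

def predict(word):
    """The most frequent continuation seen in training– no knowledge involved."""
    return follow[word].most_common(1)[0][0]

print(predict("is"), predict("a"))  # the statistically likeliest next words
```

It will tell you Adele is a singer for the same reason it would say anything else: frequency, not understanding.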
This is not a knowledge representation system. This is a parlor trick, just trying to make it another 30 seconds convincing you that there’s somebody there. And it’s how you then
get to weird predictions, associations, in this
case through A/B testing. No human would design
an ad like this. It just turned out that randomly
the growing fingernails starts to sell a lot of insurance,
and we don’t know why. It’s a kind of weird Promethean
gift of fact without skill behind it where you
get this knowledge but there’s nothing to look in. It’s just like, it works. Don’t ask. It works until it doesn’t
either because it turns out that opium production
is not related to the silhouette of Mt. Everest or again, in the
Tay example, because there is an adversarial pushback. And the adversarial pushback we saw on Tay is available against otherwise extremely accurate predictive AI systems. So this is done
by undergraduates. In a course that I am teaching
with Joi Ito at MIT on AI, these undergraduates
took a picture of a cat and ran it through
Google’s Inception v3 algorithm, which is Google’s algorithm for identifying pictures. Believe me, it’s had a lot of
training on this kind of thing. It is very sure the number
one answer is tabby cat, then, with a bit less probability, tiger cat, then Egyptian cat, and then you get to plastic bag. So by the time you’re on the–
no pun intended– long tail here, it’s just a
shruggy from the system. It really is the next
ranked one though. And it makes you wonder,
well, why does it– why is that even in the running? And it turns out, if
you perturb the image in just the right way– and by perturb, I mean change a pixel or two– it looks completely the same to the human eye. But when Google beholds it
with that changed pixel, as they did, here’s
what Inception thought the cat was, with 100 percent certainty: It’s guacamole. Now when you think about that,
that’s very, very puzzling because, well, I would think
that, within all those circles and wires of the neural
network, somewhere is represented the concept
of an ear and a whisker, neither of which, ideally, is present in guacamole. And yet there is the answer.
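You can see the trick in miniature without Inception at all. Here is a toy sketch in the spirit of the fast gradient sign method– the linear “classifier” and all of its numbers are invented for illustration– where a nudge of at most 0.1 per input, aimed along the model’s own weights, flips the verdict:

```python
import random

random.seed(1)
n = 10_000  # stand-ins for pixels

# An invented linear classifier: score = w . x, call it "cat" when score > 0.
w = [random.choice([-1.0, 1.0]) for _ in range(n)]
x = [random.gauss(0, 1) for _ in range(n)]
score = sum(wi * xi for wi, xi in zip(w, x))

# Fast-gradient-sign-style perturbation: move each input by at most eps
# in the direction that hurts the current label the most.
eps = 0.1
direction = 1.0 if score > 0 else -1.0
x_adv = [xi - direction * eps * wi for wi, xi in zip(w, x)]  # |wi| is 1 here

adv_score = sum(wi * xi for wi, xi in zip(w, x_adv))
print(score > 0, adv_score > 0)  # the label flips
print(max(abs(a - b) for a, b in zip(x_adv, x)))  # yet no input moved more than ~0.1
```

Real image models are not linear, but gradients hand an attacker the same lever, and across a million pixels the imperceptible nudges add up.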
In fact, these undergrads went one step further. They were able to take a 3D object– 3D printed– and paint it. This is now analog, not
digital, forget pixels. They could paint
this turtle in a way that, when the Inception v3 engine looks at it, it is convinced that it is a
rifle from nearly every angle. That is an extremely
worrisome prospect: that systems that are working 99.9 percent of the time can, with the right adversarial interference, be completely tricked. That’s a problem. But perhaps it is
also an opportunity. So imagine right now
there’s a problem. You upload a picture to
Facebook of some kind. Now forevermore perhaps,
and retroactively if you uploaded
it earlier, there could be an engine that
does face recognition. You’ve seen this by now, right? Facebook will
automatically tag you in other people’s
photos of you because it knows what you look like, and it
knows what everybody else looks like as well if
they use Facebook or have been tagged on
Facebook by anybody else. But it turns out, you
could take this photo and you could perturb
it before you upload it in a way that defeats
the possibility, for the moment at least, of doing that kind
of name and image recognition. In fact, you can perturb it as
against any given classifier. So there are AI
classifiers that look through huge volumes
of photos and, say, we can tell you gender. This is male. This is female. Well, you could perturb it in
a way that actually makes it so that it’s looking at
it, and it has no idea. It’s like, guacamole
is genderless. That’s all it has. And yet, to the human
consuming the photo, it looks the same
as it always does. My sense would be– and we
actually have a group working on this project as part of
something called Berkman Klein Assembly– that we ought to have
this tool available so that, when we upload
stuff, we are not only trying to block in 2018
it’s use in AI systems that we haven’t even fathomed
yet from mass categorization. That’s why I say
we’re equalizing it. We’re preventing
categorizations that we don’t want to have a part of
unless we know more about them. But it also holds out the
possibility that, by doing so, we are making our
intentions clear. And it basically serves
as a Do Not Track bit. So that, yeah, are they
going to get around this? Of course they will. You can train it once
you see something that has been perturbed
not to be tricked again. But this would be a way
to say, when you do, it’s obvious that you are
going around my preference. Now the Europeans, if not the Americans, can come in and regulate them
for not paying attention to what the user wants,
for ignoring user consent. And in fact, I
think, first of all, it’s worth noting this
is totally borrowing from Creative Commons. Thank you, Jamie, by the
way, for Creative Commons. I appreciate it. But Creative
Commons, just as it’s meant to label copyrighted
works in ways that you can express how they are shared– because prior to that, there
was no easy way to say, I consign this to
the public domain, which is why in the
words of Professor Lange, the buffalo were getting thin on the range because you couldn’t even identify the buffalo as a buffalo. And here we have ways then
of labeling the intentions we would like to
express with respect to the future use of images. In fact, I think we
could make the case that our tweaking, our
perturbation of these images, in fact, is copyright management
information under 17 USC 1202. And should they alter
it for the purpose of making it categorizable
again, they are removing or altering our copyright
management information and all the things that
we may have lamented about the abuses of the DMCA
anti-circumvention provisions now come to help us. That is to say, should any big party try to keep doing its categorizations against our wishes by changing that bit back, they’re actually violating the DMCA– I think a really interesting
chain of argument doctrinally to pursue. There is some precedent for the
likes of Facebook and Twitter themselves being helpful. Forget them reverse engineering
and trying to take it away. If you take a photo on your
iPhone or your Android, it has a ton of
so-called EXIF data that shows where it was
taken, when it was taken. So here’s a photo
taken in Japan. You can literally get it down
to the foot of where it was. If you upload that straight
to Twitter back in the day, you’re uploading the
EXIF data along with it, meaning anybody could
look at the photo and see exactly where you are. That would have been
a privacy catastrophe, and Twitter and
Facebook affirmatively strip out the EXIF
data before they post the photos you send
them from your iPhone and your Android.
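To see roughly what that stripping involves– this is a simplified byte-level sketch, not the platforms’ actual code– EXIF rides inside a JPEG’s APP1 segment, so dropping that one segment while copying the rest removes the metadata without touching the image:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Drop APP1 (0xFFE1) segments, where JPEG files carry EXIF metadata."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            break  # not a marker; stop walking segments
        marker = jpeg[i + 1]
        if marker == 0xDA:  # start-of-scan: the rest is entropy-coded image data
            out += jpeg[i:]
            break
        size = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:  # keep every segment except APP1/EXIF
            out += jpeg[i:i + 2 + size]
        i += 2 + size
    return bytes(out)

# A minimal fake JPEG: SOI, an APP1/EXIF segment, a quantization table, SOS.
exif = b"\xff\xe1" + (2 + 10).to_bytes(2, "big") + b"Exif\x00\x00GPS!"
dqt = b"\xff\xdb" + (2 + 4).to_bytes(2, "big") + b"\x00\x01\x02\x03"
sos = b"\xff\xda" + (2 + 2).to_bytes(2, "big") + b"\x00\x00" + b"scan-data"
fake = b"\xff\xd8" + exif + dqt + sos

clean = strip_exif(fake)
print(b"Exif" in fake, b"Exif" in clean)  # True False
```

In real photos, libraries such as Pillow expose this same APP1 data via `Image.getexif()`; the platforms do their stripping server-side as part of re-encoding.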
They can similarly perturb the photos themselves as they display them, so that third parties and other scrapers can’t make these uses of them. I think Facebook could
actually be an ally here. So let’s bring this
in for a landing. The story of the battles
of digital copyright from the 90s and early
aughts had this flavor of cat and mouse about
it, just like the story of the perturbations
I’m telling you. It’s just with the roles kind
of reversed as to who is the cat and who is the mouse. And if you ask me, I
feel like the reason I keep putting the past tense
on the battles about copyright that I’ve been talking
about is because I think they are largely resolved. Now maybe they’re kind of
resolved in a North South Korea kind of way, like an armistice,
rather than a full peace. But there’s been peace on
the border for quite a while. The two are actually getting
along I think all right. And that peace takes
the following form. Like, here it is. Did these have digital
rights management? Digital restrictions management,
if you’re Richard Stallman, things built into the
Kindle, to Spotify, Netflix? Yeah. Do we notice it? Not so much. If we wanted to grab
a clip of something and use it for a class– which was classic fair use that DRM would prohibit, and how dare they– yeah, we can. It’s kind of like a utility
now, like running water. You pay your Spotify. You get it ad free
and all the music you might want in the system. And in fact, quite
open to new artists that don’t have
to sign to a label to at least be accessible
by a URL in Spotify if not promoted on
their front page. This is what the armistice
looks like for copyright. And it’s worth asking, what
would an armistice look like in the space of
artificial intelligence where the roles are
largely reversed. And I’m not sure I know the
answer to that question. But one of the interesting
different variables here is, back in the
day in copyright, academics I think had
a clear role to play. They were themselves
copyright holders. They were learned expositors
on what was good and not or what was advisable
or not in copyright. I think of all the
times Professor Lange testified before Congress, served on advisory commissions. Professors had a role in the public interest, calling it as we saw it. And that role, I think, is
diminishing in the parallel AI zone today. This is the CERN
particle accelerator. It’s the kind of thing that
naturally draws academics to it like a magnet. And in fact, you couldn’t
build it without the academics and without major government
funding in what is described as the public interest. There’s not– even Elon
Musk would be, like, pass. It’s a tunnel. He likes that. It is a loop. He likes that. It’s fast, it’s hyper, but
it’s not moving people. So this is the kind of thing
that quite by its nature made the case implicitly for
the academic establishment, for a .edu sector because we
were looking for exploring forms of knowledge that
might have a payoff later on, but were really for
their own sake and to come to a better
understanding of things. And early in the
days of the web, academics played
that role because we could scrape the web. We could go just write a
little bot to figure out what was going on, set up a
website to share our findings, or to let people build
something online. Less and less is that the case. More and more are we seeing
Netflix, Amazon, you name it, or Facebook– these
kind of walled gardens that have lots of benefits,
but aren’t easily interrogated. I can’t run a study
to see what Facebook has on me or on others. Or what is the most
shared topic on Facebook? I got to trust
Facebook for that, and that’s a huge difference. There’s a colleague of mine in
the Harvard computer science department. Here’s the house organ of
the Harvard newsletter saying in June of 2010,
congratulations Matt, you’ve been promoted
to full professor. All right, here is his
blog entry in November of 2010, why I’m
leaving Harvard. There’s one simple reason. I’m leaving academia. I love the work I’m
doing at Google. I get to hack all day,
working on problems, orders of magnitude larger
and more interesting than I can work on
at any university– sad trombone. That is really hard to beat. It’s worth more to me
than having prof in front of my name or a big office– I didn’t realize he
had a big office. I need to have a conversation
with somebody about that– or even permanent employment. Working at Google is
realizing the dream I’ve had of building big
systems my entire career. And indeed, Google DeepMind, an acquisition by Google based in London, is acknowledged as being the foremost deployer and innovator in machine learning and artificial intelligence in the world. They have 400 post-docs. I don’t know how we compete
with that as a computer science department. That is a shift of
activity in a field that I think is rife with
ethical questions for which the progress is going to
be made without academia along for the ride. And I think, structurally,
we need to contend with that and figure out how to
do something about that, and possibly one person at a
time we need to think about it. There was originally, in the 18th century, the concept of a learned
profession, people who, through extraordinary
application of talent and patience and skill,
learned to apprentice to a very complex set of
rules and relationships that in turn, once they mastered
it, made them very powerful. And therefore, that’s
why, among other things, they owe a fiduciary duty
both to the people they serve individually and
to society at large. There’s a trust relationship. Here were the three original
learned professions, access to special powers
through complicated rules– divinity, law, and medicine. And then just at the
turn of the 18th century, we add a fourth learned
profession– surveying. And I’m thinking, maybe
it’s time for another learned profession
of the sort that are the technologists
swarming into these companies, building stuff. They’re going to be
the early warning system if they are thinking of
themselves as an alarm at all about what minefields they find
themselves in, for which there isn’t going to be an easy
answer from a compliance department or a lawyer. But if you consult
the lawyer, ideally that lawyer won’t just be
saying, if it isn’t prohibited, it’s permitted. The lawyer will be saying, huh,
is this wise, is this right, is this a risk we should take,
not just for the company, but on behalf of society? How do we take that
kind of thinking that we tell ourselves
and we try in law schools to inculcate and put that
into the engineering schools as well? There’s one Engineering Society,
I think from Canada, that had the practice of a special
iron ring for the engineers, originally said to be
from a bridge that fell, kind of a humility
exercise as you work. And maybe there should be some
similar kind of awareness other than move fast and break
things because the things that we’re building are very
functional, but impenetrable. This is 1984’s Promethean gift
for which none of us I think is capable of setting the
clock, but at least we know how to play the tape. And it calls to mind,
as it did with copyright as well, Clarke’s Third Law,
Arthur C Clarke’s Third Law– any sufficiently
advanced technology is indistinguishable from magic. He in turn was preparing
a derivative work from the more blunt observations of Leigh Brackett: witchcraft to the ignorant,
simple science to the learned. That’s the definition
of a learned profession in juxtaposition with the laity. And it is a way of
thinking of us as a polity, as humanity scattered
between, on the one hand– let’s see, we’ve got the nerds,
who they know how to work it– they are exempt from the
technological rules– the Luddites, who say I’m
exempt because technology doesn’t bear on me. That is a smaller and
smaller group of people. It’s not clear that just
going off of Facebook solves the problem such
as it is of Facebook. And of course, the rest
of us in the middle– and it’s a way of thinking both
about how to have those of us in the middle better able to
contend with these issues, to express preferences,
not just individually, but societally with
respect to them. And it’s also ultimately, I
think, a question of values. And that gets back
to the work of people like Professor Boyle,
Professor Lang, who are realizing that it’s normative. At the end of the day,
you stand for something and you make that case,
rather than just saying, I just build them. I don’t aim them sort of thing. And it’s really an invitation
to all of us, whatever our secret powers may be on the
tech side, on the law side, on the philosophy side, to
be engaged with these issues, to recognize the kind
of terrain ahead of us that will be a
highly contested one, and possibly not as
simple as it felt back in the 90s of that
binary between, hey, we just want to prepare
some derivative works, sing in the shower,
free the loofah. Thank you very much. [APPLAUSE] Thank you. We’re out of time. Well, you could take one. We left time for one
meandering question, that I’ll be like,
we’re out of time, but that’s a great question. Who wants to ask it
or offer a rebuttal? There was nothing like
this in New Zealand. There were sheep. [INAUDIBLE] Oh, yeah. Sorry, yes, sir. Just, if you don’t mind,
giving your thoughts on some of the more
radical solutions that people have proposed. I know some people have talked
about or at least voiced some possibilities of
establishing a property right with something
like personally identifiable information,
which is a little more in the vein of Europe
and [INAUDIBLE].. What are your
thoughts on that as opposed to your
armistice situation? The question was property
right as a radical solution. And I think there were a number
of us thinking about that, say, 25 years ago. I am very skeptical of that
because the property right– part of the essence of
property– is alienability. If you own it, you
can dispose of it. And of course, it’s
a bundle of sticks. You get rid of this. You get rid of that. Oh, here are your
friends’ permissions. Would you like to share your
friend’s data, your data? And we’re trying
to get something done on a Friday night. I want to see the hamster dance. I’m going to click OK. And that’s what
tells me that what makes property work is
that people can make rational informed choices on
the basis of economic concerns or moral ones about how they
want the thing they own to be disposed of, treated, used. And I think that the information
is too proliferated for us to be able to make
intelligent decisions. And the ones asking us will
have all the advantages in how to phrase the question. My kind of maxim is what
normally appears as a choice box– would you like to do
the following with your data in order to take this
personality quiz, to use a contemporaneous
example– really that should be
rephrased as, would you like to be screwed over, yes or no. And if it can be translated
to that question, the government should insist
that it be pre answered no and don’t even bother the
person with the question. The only questions to ask
are where people are really in a position to have a
reasonable, differing view about it– organ donation, for example. That seems to be
the kind of question that, yeah, people should
weigh in individually on. And I might think that
somebody else is making a bad decision for society. But I respect their
integrity to do it. That’s the rare question. To kind of mix our
metaphors– if somebody were to come up and
say Facebook style, I would like access
to your organs for the following purposes,
yes or no, it’s weird for you to be like, are you a doctor? Do you have a fiduciary duty? Are you Doctor Nick? What do I– Simpsons reference, sorry. That’s the kind of thing that
property would be [INAUDIBLE].. And I’d worry about Boris
Yeltsin’s Russia circa 1992 or ’93, when they privatized the hell out of everything, gave the citizens a share, and
people came along, Music Man-style, and bought it all up. That, I fear, is what a property regime could get us. Thank you very much. [APPLAUSE]
