<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<script src="https://www.w3.org/Tools/respec/respec-w3c" defer class="remove"></script>
<title>Privacy Principles</title>
<script class="remove">
// All config options at https://respec.org/docs/
var respecConfig = {
specStatus: 'ED',
group: 'tag',
format: 'markdown',
editors: [{
name: 'Robin Berjon',
company: 'Protocol Labs',
companyURL: 'https://protocol.ai/',
url: 'https://berjon.com/',
note: 'The New York Times until Sep 2022',
w3cid: 34327,
}, {
name: 'Jeffrey Yasskin',
company: 'Google',
companyURL: 'https://google.com/',
w3cid: 75192,
}],
github: 'w3ctag/privacy-principles',
latestVersion: 'https://www.w3.org/TR/privacy-principles/',
shortName: 'privacy-principles',
lint: {
'required-sections': false,
},
postProcess: [
() => {
// relabel each principle, and lose the numbering, delabel principle summary
for (let label of document.querySelectorAll('div.practice > a.marker > bdi, #bp-summary > ul > li > a.marker > bdi')) {
label.textContent = 'Principle';
}
// remove empty paragraphs that markdown processing leaves inside practice blocks
for (let p of document.querySelectorAll('div.practice > p')) {
if (/^\s*$/.test(p.textContent)) p.remove();
}
for (let prac of document.querySelectorAll('div.practice.advisement')) {
prac.classList.remove('advisement');
prac.classList.add('principle');
}
let h2 = document.querySelector('#bp-summary h2');
h2.innerHTML = h2.innerHTML.replace('Best Practices Summary', 'Principles Summary');
}
],
localBiblio: {
'Addressing-Cyber-Harassment': {
title: 'Addressing cyber harassment: An overview of hate crimes in cyberspace',
authors: ['Danielle Keats Citron'],
publisher: 'Case Western Reserve Journal of Law, Technology & the Internet',
date: '2015',
href: 'https://scholarship.law.bu.edu/cgi/viewcontent.cgi?article=1634&context=faculty_scholarship'
},
'Anti-Tracking-Policy': {
title: 'Anti-Tracking Policy',
href: 'https://wiki.mozilla.org/Security/Anti_tracking_policy#Tracking_Definition',
publisher: 'Mozilla',
},
'Automating-Inequality': {
title: 'Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor',
href: 'https://us.macmillan.com/books/9781250074317/automatinginequality',
authors: ['Virginia Eubanks'],
publisher: 'Macmillan',
},
'Beyond-Individual': {
title: 'Privacy Beyond the Individual Level (in Modern Socio-Technical Perspectives on Privacy)',
href: 'https://doi.org/10.1007/978-3-030-82786-1_6',
authors: ['J.J. Suh', 'M.J. Metzger'],
publisher: 'Springer',
},
'Big-Data-Competition': {
title: 'Big Data and Competition Policy',
href: 'https://global.oup.com/academic/product/big-data-and-competition-policy-9780198788140?lang=en&cc=us',
authors: ['Maurice E. Stucke', 'Allen P. Grunes'],
publisher: 'Oxford University Press',
},
'Bit-By-Bit': {
title: 'Bit By Bit: Social Research in the Digital Age',
href: 'https://www.bitbybitbook.com/',
authors: ['Matt Salganik'],
publisher: 'Princeton University Press',
status: 'You can read this book free of charge, but Matt is an outstanding author and I encourage you to support him by buying his book!',
},
'Browser-Parties': {
title: 'Parties and browsers',
href: 'https://tess.oconnor.cx/2020/10/parties',
authors: ["Tess O'Connor"],
},
'CAT': {
title: 'Content Aggregation Technology (CAT)',
authors: ['Robin Berjon', 'Justin Heideman'],
href: 'https://nytimes.github.io/std-cat/',
},
'Contextual-Integrity': {
title: 'Privacy As Contextual Integrity',
authors: ['Helen Nissenbaum'],
href: 'https://digitalcommons.law.uw.edu/wlr/vol79/iss1/10/',
publisher: 'Washington Law Review',
},
'Confiding': {
title: 'Confiding in Con Men: U.S. Privacy Law, the GDPR, and Information Fiduciaries',
authors: ['Lindsey Barrett'],
href: 'https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3354129',
},
'Consent-Lackeys': {
title: 'Publishers tell Google: We\'re not your consent lackeys',
authors: ['Rebecca Hill'],
href: 'https://www.theregister.com/2018/05/01/publishers_slam_google_ad_policy_gdpr_consent/',
publisher: 'The Register',
},
'Convention-108': {
title: 'Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data',
href: 'https://rm.coe.int/1680078b37',
publisher: 'Council of Europe',
},
'Dark-Patterns': {
title: 'Dark patterns: past, present, and future',
authors: ['Arvind Narayanan', 'Arunesh Mathur', 'Marshini Chetty', 'Mihir Kshirsagar'],
href: 'https://dl.acm.org/doi/10.1145/3397884',
publisher: 'ACM',
},
'Dark-Pattern-Dark': {
title: 'What Makes a Dark Pattern… Dark? Design Attributes, Normative Considerations, and Measurement Methods',
authors: ['Arunesh Mathur', 'Jonathan Mayer', 'Mihir Kshirsagar'],
href: 'https://arxiv.org/abs/2101.04843v1',
},
'Data-Futures-Glossary': {
title: 'Data Futures Lab Glossary',
authors: ['Mozilla Insights'],
href: 'https://foundation.mozilla.org/en/data-futures-lab/data-for-empowerment/data-futures-lab-glossary/',
publisher: 'Mozilla Foundation',
},
'Data-Minimization': {
title: 'Data Minimization in Web APIs',
authors: ['Daniel Appelquist'],
href: 'https://www.w3.org/2001/tag/doc/APIMinimization-20100605.html',
publisher: 'W3C TAG',
status: 'Draft Finding',
},
'De-identification-Privacy-Act': {
title: 'De-identification and the Privacy Act',
authors: ['Office of the Australian Information Commissioner'],
href: 'https://www.oaic.gov.au/privacy/guidance-and-advice/de-identification-and-the-privacy-act',
publisher: 'Australian Government',
},
'Digital-Assistant-Trust': {
title: "Facebook's new digital assistant 'M' will need to earn your trust",
authors: ['Neil Richards', 'Woodrow Hartzog'],
href: 'https://www.theguardian.com/technology/2015/sep/09/what-should-we-demand-of-facebooks-new-digital-assistant',
publisher: 'The Guardian',
},
'Digital-Market-Manipulation': {
title: 'Digital Market Manipulation',
authors: ['Ryan Calo'],
href: 'https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2309703',
publisher: 'George Washington Law Review',
},
'Eurobarometer-443': {
title: 'Eurobarometer 443: e-Privacy',
authors: ['European Commission'],
href: 'https://ec.europa.eu/COMMFrontOffice/publicopinion/index.cfm/Survey/getSurveyDetail/instruments/FLASH/surveyKy/2124',
},
'Fiduciary-UA': {
title: 'The Fiduciary Duties of User Agents',
href: 'https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3827421',
authors: ['Robin Berjon'],
},
'FIP': {
title: 'Fair Information Practices: A Basic History',
href: 'http://bobgellman.com/rg-docs/rg-FIPShistory.pdf',
authors: ['Bob Gellman'],
status: '(PDF)',
},
'For-Everyone': {
title: 'This Is For Everyone',
href: 'https://twitter.com/timberners_lee/status/228960085672599552',
authors: ['Tim Berners-Lee'],
status: 'Statement made to the London 2012 Olympics opening ceremony',
},
'GDPR': {
title: 'General Data Protection Regulation (GDPR) / Regulation (EU) 2016/679',
href: 'https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679&from=EN',
authors: ['European Parliament and Council of European Union'],
},
'GKC-Privacy': {
title: 'Governing Privacy in Knowledge Commons',
authors: ['Madelyn Rose Sanfilippo', 'Brett M. Frischmann', 'Katherine J. Strandburg'],
href: 'https://www.cambridge.org/core/books/governing-privacy-in-knowledge-commons/FA569455669E2CECA25DF0244C62C1A1',
publisher: 'Cambridge University Press',
},
'GPC': {
title: 'Global Privacy Control (GPC)',
authors: ['Robin Berjon', 'Sebastian Zimmeck', 'Ashkan Soltani', 'David Harbage', 'Peter Snyder'],
href: 'https://globalprivacycontrol.github.io/gpc-spec/',
publisher: 'W3C',
},
'IAD': {
title: 'Understanding Institutional Diversity',
authors: ['Elinor Ostrom'],
href: 'https://press.princeton.edu/books/paperback/9780691122380/understanding-institutional-diversity',
publisher: 'Princeton University Press',
},
'Individual-Group-Privacy': {
title: 'From Individual to Group Privacy in Big Data Analytics',
authors: ['Brent Mittelstadt'],
href: 'https://link.springer.com/article/10.1007/s13347-017-0253-7',
publisher: 'Philosophy & Technology',
},
'Industry-Unbound': {
title: 'Industry Unbound: the inside story of privacy, data, and corporate power',
authors: ['Ari Ezra Waldman'],
href: 'https://www.cambridge.org/core/books/industry-unbound/787989F90DBFC08E47546178A7AB04F7',
publisher: 'Cambridge University Press',
},
'Internet-of-Garbage': {
title: 'The Internet of Garbage',
authors: ['Sarah Jeong'],
publisher: 'The Verge',
date: '2018',
href: 'https://www.theverge.com/2018/8/28/17777330/internet-of-garbage-book-sarah-jeong-online-harassment'
},
'Lost-In-Crowd': {
title: 'Why You Can No Longer Get Lost in the Crowd',
authors: ['Woodrow Hartzog', 'Evan Selinger'],
href: 'https://www.nytimes.com/2019/04/17/opinion/data-privacy.html',
publisher: 'The New York Times',
},
'Nav-Tracking': {
title: 'Navigational-Tracking Mitigations',
authors: ['Pete Snyder', 'Jeffrey Yasskin'],
href: 'https://privacycg.github.io/nav-tracking-mitigations/',
publisher: 'W3C',
},
'NIST-800-63A': {
title: 'Digital Identity Guidelines: Enrollment and Identity Proofing Requirements',
href: 'https://pages.nist.gov/800-63-3/sp800-63a.html',
publisher: 'NIST',
authors: ['Paul A. Grassi', 'James L. Fenton', 'Naomi B. Lefkovitz', 'Jamie M. Danker', 'Yee-Yin Choong', 'Kristen K. Greene', 'Mary F. Theofanos'],
date: 'March 2020'
},
'NYT-Privacy': {
title: 'How The New York Times Thinks About Your Privacy',
authors: ['Robin Berjon'],
href: 'https://open.nytimes.com/how-the-new-york-times-thinks-about-your-privacy-bc07d2171531',
publisher: 'NYT Open',
},
'Obfuscation': {
title: 'Obfuscation: A User\'s Guide for Privacy and Protest',
authors: ['Finn Brunton', 'Helen Nissenbaum'],
href: 'https://www.penguinrandomhouse.com/books/657301/obfuscation-by-finn-brunton-and-helen-nissenbaum/',
publisher: 'Penguin Random House',
},
'Obscurity-By-Design': {
title: 'Obscurity by Design',
authors: ['Woodrow Hartzog', 'Frederic Stutzman'],
href: 'https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2284583',
},
'OECD-Guidelines': {
title: 'OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data',
href: 'https://www.oecd.org/sti/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm',
publisher: 'OECD',
},
'PEN-Harassment': {
href: 'https://onlineharassmentfieldmanual.pen.org/defining-online-harassment-a-glossary-of-terms/',
title: 'Online Harassment Field Manual',
publisher: 'PEN America',
},
'PEW-Harassment': {
title: 'The State of Online Harassment',
publisher: 'Pew Research Center',
date: 'January 2021',
href: 'https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/'
},
'Phone-On-Feminism': {
title: 'This is your phone on feminism',
href: 'https://conversationalist.org/2019/09/13/feminism-explains-our-toxic-relationships-with-our-smartphones/',
authors: ['Maria Farrell'],
publisher: 'The Conversationalist',
rawDate: '2019-09-13',
},
'Privacy-Behavior': {
title: 'Privacy and Human Behavior in the Age of Information',
authors: ['Alessandro Acquisti', 'Laura Brandimarte', 'George Loewenstein'],
href: 'https://www.heinz.cmu.edu/~acquisti/papers/AcquistiBrandimarteLoewenstein-S-2015.pdf',
publisher: 'Science',
},
'Privacy-Concerned': {
title: 'Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information',
authors: ['Brooke Auxier', 'Lee Rainie', 'Monica Anderson', 'Andrew Perrin', 'Madhu Kumar', 'Erica Turner'],
href: 'https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/',
publisher: 'Pew Research Center',
},
'Privacy-Contested': {
title: 'Privacy is an essentially contested concept: a multi-dimensional analytic for mapping privacy',
authors: ['Deirdre K. Mulligan', 'Colin Koopman', 'Nick Doty'],
href: 'https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5124066/',
publisher: 'Philosophical Transactions A',
},
'Privacy-Harms': {
title: 'Privacy Harms',
authors: ['Danielle Keats Citron', 'Daniel Solove'],
href: 'https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3782222',
},
'Privacy-In-Context': {
title: 'Privacy in Context',
authors: ['Helen Nissenbaum'],
href: 'https://www.sup.org/books/title/?id=8862',
publisher: 'SUP',
},
'Privacy-Is-Power': {
title: 'Privacy Is Power',
authors: ['Carissa Véliz'],
href: 'https://www.penguin.com.au/books/privacy-is-power-9781787634046',
publisher: 'Bantam Press',
},
'Privacy-Threat': {
title: 'Target Privacy Threat Model',
href: 'https://w3cping.github.io/privacy-threat-model/',
authors: ['Jeffrey Yasskin', 'Tom Lowenthal'],
publisher: 'W3C PING',
},
'PSL-Problems': {
authors: ['Ryan Sleevi'],
href: 'https://github.com/sleevi/psl-problems',
title: 'Public Suffix List Problems'
},
'Relational-Turn': {
title: 'A Relational Turn for Data Protection?',
href: 'https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3745973&s=09',
authors: ['Neil Richards', 'Woodrow Hartzog'],
},
'Seeing-Like-A-State': {
title: 'Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed',
href: 'https://bookshop.org/books/seeing-like-a-state-how-certain-schemes-to-improve-the-human-condition-have-failed/9780300246759',
authors: ['James C. Scott'],
},
'SILVERPUSH': {
title: 'How TV ads silently ping commands to phones: Sneaky SilverPush code reverse-engineered',
href: 'https://www.theregister.com/2015/11/20/silverpush_soundwave_ad_tracker/',
publisher: 'The Register',
authors: ['Iain Thomson']
},
'Strava-Debacle': {
title: 'The Latest Data Privacy Debacle',
authors: ['Zeynep Tufekci'],
href: 'https://www.nytimes.com/2018/01/30/opinion/strava-privacy.html',
publisher: 'The New York Times',
},
'Strava-Reveal-Military': {
title: 'Strava Fitness App Can Reveal Military Sites, Analysts Say',
authors: ['Richard Pérez-Peña', 'Matthew Rosenberg'],
href: 'https://www.nytimes.com/2018/01/29/world/middleeast/strava-heat-map.html',
publisher: 'The New York Times',
},
'Surveillance-Capitalism': {
title: 'The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power',
authors: ['Shoshana Zuboff'],
href: 'https://www.publicaffairsbooks.com/titles/shoshana-zuboff/the-age-of-surveillance-capitalism/9781610395694/',
publisher: 'Hachette Public Affairs',
},
'Taking-Trust-Seriously': {
title: 'Taking Trust Seriously in Privacy Law',
href: 'https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2655719',
authors: ['Neil Richards', 'Woodrow Hartzog'],
},
'Twitter-Developer-Policy': {
title: 'Developer Policy - Twitter Developers',
href: 'https://developer.twitter.com/en/developer-terms/policy',
publisher: 'Twitter'
},
'Tracking-Prevention-Policy': {
title: 'Tracking Prevention Policy',
href: 'https://webkit.org/tracking-prevention-policy/',
publisher: 'Apple',
},
'Understanding-Privacy': {
title: 'Understanding Privacy',
authors: ['Daniel Solove'],
href: 'https://www.hup.harvard.edu/catalog.php?isbn=9780674035072',
publisher: 'Harvard University Press',
},
'Why-Privacy': {
title: 'Why Privacy Matters',
authors: ['Neil Richards'],
href: 'https://global.oup.com/academic/product/why-privacy-matters-9780190939045?cc=us&lang=en&',
publisher: 'Oxford University Press',
},
'Records-Computers-Rights': {
title: 'Records, Computers and the Rights of Citizens',
publisher: 'U.S. Department of Health, Education & Welfare',
href: 'https://archive.epic.org/privacy/hew1973report/'
},
'Relational-Governance': {
title: 'A Relational Theory of Data Governance',
authors: ['Salomé Viljoen'],
href: 'https://www.yalelawjournal.org/feature/a-relational-theory-of-data-governance',
publisher: 'Yale Law Journal',
}
},
};
</script>
<style>
.principle {
border: .5em;
border-color: cornflowerblue;
border-style: none none none double;
background: transparent;
padding: .5em;
page-break-inside: avoid;
margin: 1em auto;
}
.principle > .marker {
color: cornflowerblue;
font-weight: bold;
}
q {
font-style: italic;
}
</style>
</head>
<body data-cite="html indexedDB service-workers fingerprinting-guidance url">
<section id="abstract">
Privacy is an essential part of the Web ([[?ETHICAL-WEB]]). This document provides definitions
for privacy and related concepts that are applicable worldwide. It also provides a set of privacy
principles that should guide the development of the Web as a trustworthy platform. People using
the Web would benefit from a stronger relationship between technology and policy, and this
document is written to work with both.
</section>
<section id="sotd">
This document is a Draft Finding of the [Technical Architecture Group (TAG)](https://www.w3.org/2001/tag/)
which we are releasing as a Draft Note. The intent is for this document to become a W3C Statement.
It was prepared by the [Web Privacy Principles Task Force](https://github.com/w3ctag/privacy-principles),
which was convened by the TAG. Publication as a Draft Finding or Draft Note does not imply
endorsement by the TAG or by the W3C Membership.
This draft <strong>does not yet</strong> reflect the consensus of the TAG or the task force and may
be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to cite this
document as anything other than a work in progress.
It will continue to evolve and the task force will issue updates as often as needed. At the
conclusion of the task force, the TAG intends to adopt this document as a Finding.
</section>
<section class="introductory">
## How This Document Fits In
This document elaborates on the [privacy principle](https://www.w3.org/2001/tag/doc/ethical-web-principles/#privacy) in the [[[Ethical-Web]]]. It doesn't
address how to balance the different principles in that document if they come into
conflict.
Privacy is covered by legal frameworks and this document recognises that existing data
protection laws take precedence for legal matters. However, because the Web is global, we
benefit from having shared concepts to guide its evolution as a system built for the people using it
([[?RFC8890]]). A clear and well-defined view of privacy on the Web, informed by research,
can hopefully help all the Web's participants in different legal regimes. Our shared
understanding is that the law is a floor, not a ceiling.
</section>
<section class="introductory">
## Audience for this Document
The primary audiences for this document are
* browser developers,
* authors of web specifications,
* reviewers of web specifications, and
* web developers.
Other audiences include:
* policy makers, and
* operators of privacy-related services.
This document is intended to help its audiences address privacy concerns as early as possible in the life
cycle of a new Web standard or feature, or in the development of Web products. Designing with privacy in
mind from the start helps avoid the need to later add special cases to comply with the requirements of
one law or one jurisdiction at a time, or to build systems that turn out to be unacceptable to users.
Because this document guides privacy reviews of new standards, designers should consult it early in the design to make sure their feature passes the review smoothly.
</section>
# An Introduction to Privacy on the Web {#intro}
The Web is for everyone ([[?For-Everyone]]). It is "<i>a platform that helps people and provides a
net positive social benefit</i>" ([[?ETHICAL-WEB]], [[?design-principles]]). One of the ways in which the
Web serves people is by protecting them in the face of asymmetries of power, and this includes
establishing and enforcing rules to govern the power of data.
The Web is a social and technical system made up of [=information flows=]. Because this document
is specifically about [=privacy=] as it applies to the Web, it focuses on privacy with respect to
information flows. Our goal is not to cover all privacy issues, but rather to provide enough
background to support the Web community in making informed decisions about privacy and to weave
privacy into the architecture of the Web. Few architectural principles are absolute, and privacy is
no exception: privacy can come into tension with other desirable properties of an ethical
architecture, and when that happens the Web community will have to work together to strike the right
balance.
Information is power. It can be used to predict and to influence people, as well as to design online
spaces that control people's behaviour. The collection and [=processing=] of information in greater
volume, with greater precision and reliability, with increasing interoperability across a growing
variety of data types, and at intensifying speed is leading to an unprecedented concentration of power
that threatens private and public liberties.
What's more, automation and the increasing computerisation of all aspects of our lives
both increase the power of information and decrease the cost of a number of intrusive
behaviours that would be more easily kept in check if the perpetrator had to be in the same room as
the victim.
These asymmetries of information and of automation create commanding asymmetries of power.
<dfn data-lt="governance">Data governance</dfn> is the system of [=principles=] that regulate [=information flows=]. When
[=people=] are involved in [=information flows=], [=data governance=] determines how
these [=principles=] constrain and distribute the power of information between different [=actors=].
The <dfn>principles</dfn> describe the way in which different [=actors=] may, must,
or must not produce or [=process=] flows of information from, to, or about other [=actors=]
([[?GKC-Privacy]], [[?IAD]]).
Typically, <dfn>actors</dfn> are [=people=] but they can also be collective endeavours such as
companies, associations, or political parties (e.g. in the case of espionage), which are known as
[=parties=]. It is important to keep in mind that not all people are equal in how they can resist
the imposition of unfair [=principles=]: some [=people=] are more [=vulnerable=] and therefore in greater
need of protection. The focus of this document is on the impact that this power differential can
have against people, but it can also impact other [=actors=], such as companies or governments.
[=Principles=] vary from [=context=] to [=context=] ([[?Understanding-Privacy]], [[?Contextual-Integrity]]): people
have different expectations of [=privacy=] at work, at a café, or at home for instance. Understanding and
evaluating a privacy situation is best done by clearly identifying:
* Its [=actors=], which include the subject of the information as well as the sender and the recipient
of the [=information flow=]. (Note that recipients might not always want to be recipients.)
* The specific type of data in the [=information flow=].
* The [=principles=] that are in use in this specific context.
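The parameters above can be captured as a simple record. The following sketch is purely illustrative; the field names are not drawn from this document or any specification:

```javascript
// Sketch: a record naming the parameters that identify a privacy
// situation, per the checklist above. Field names are illustrative.
function describeFlow({ subject, sender, recipient, dataType, principles }) {
  return `${sender} sends ${dataType} about ${subject} to ${recipient}` +
    ` under principles: ${principles.join('; ')}`;
}

// Example: a café network observing a visitor's browsing.
const flow = describeFlow({
  subject: 'visitor',
  sender: 'browser',
  recipient: 'cafe-network',
  dataType: 'browsing metadata',
  principles: ['no retention beyond the session'],
});
```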
It is important to keep in mind that there are <em>always</em> privacy [=principles=] and that all
of them imply different power dynamics. Some [=principles=] may be more permissive, but that does
not render them neutral — it merely indicates that they are supportive of the power dynamic that
emerges from permissive [=processing=]. We must therefore determine which [=principles=]
best align with ethical Web values in Web [=contexts=] ([[?ETHICAL-WEB]], [[?Why-Privacy]]).
<dfn data-lt='information'>Information flows</dfn> as understood in this document are information
exchanged or processed by [=actors=]. The information itself need not necessarily be
[=personal data=]. Disruptive or interruptive information flowing <em>to</em> a
person is in scope, as is [=de-identified=] [=data=] that can be used to manipulate people or that
was extracted by observing people's behaviour on someone else's website.
[=Information flows=] need to be understood from more than one perspective: there is the flow of information
<em>about</em> a person (the subject) being processed or transmitted to any other party, and there is
the flow of information <em>towards</em> a person (the recipient). Recipients can have their privacy violated in multiple ways such as
unexpected shocking images, loud noises while one intends to sleep, manipulative information,
interruptive messages when a person's focus is on something else, or harassment when they seek
social interactions.
On the Web, [=information flows=] may involve a wide variety of [=parties=] that are not always
recognizable or obvious to a user within a particular interaction. Visiting a website may involve
the parties that operate that site and its functionality, but also parties with network access,
which may include: Internet service providers; other network operators; local institutions providing
a network connection including schools, libraries or universities; government intelligence services;
malicious hackers who have gained access to the network or the systems of any of the other parties.
High-level threats including [=surveillance=] may be pursued by these parties. Pervasive monitoring,
a form of large-scale, indiscriminate surveillance, is a known attack on the privacy of users of the
Internet and the Web [[RFC7258]].
Information flows may also involve other people — for example, other users of a site —
which could include friends, family members, teachers, strangers, or government officials. Some
threats to privacy, including both [=disclosure=] and harassment, may be particular to the other
people involved in the information flow.
## Individual Autonomy {#autonomy}
A [=person=]'s <dfn data-lt="autonomous">autonomy</dfn> is their ability to make decisions of their own volition,
without undue influence from other [=parties=]. People have limited intellectual resources and
time with which to weigh decisions, and by necessity rely on shortcuts when making
decisions. This makes their preferences, including privacy preferences, malleable and susceptible to
manipulation ([[?Privacy-Behavior]], [[?Digital-Market-Manipulation]]). A [=person=]'s [=autonomy=] is enhanced by a
system or device when that system offers a shortcut that aligns more with what that [=person=] would
have decided given arbitrary amounts of time and relatively unlimited intellectual ability;
and [=autonomy=] is decreased when a similar shortcut goes against decisions made under such
ideal conditions.
Affordances and interactions that decrease [=autonomy=] are known as <dfn>dark patterns</dfn>.
A [=dark pattern=] does not have to be intentional ([[?Dark-Patterns]], [[?Dark-Pattern-Dark]]).
Because we are all subject to motivated reasoning, the design of defaults and affordances
that may impact [=autonomy=] should be the subject of independent scrutiny.
Given the large volume of potential [=data=]-related decisions in today's data economy,
complete informational self-determination is impossible. This fact, however, should not be
confused with the idea that privacy is dead. Studies show that [=people=] remain concerned over how
their [=data=] is [=processed=], feeling powerless and like they have lost agency
([[?Privacy-Concerned]]). Careful design of our technological infrastructure can ensure that
people's [=autonomy=] with respect to their own [=data=] is enhanced through [=appropriate=]
defaults and choice architectures.
### Opt-in, Consent, Opt-out, Global Controls {#opt-in-out}
<aside class="issue">Broaden this from [=processing=] to general interactions with systems.</aside>
Different procedural mechanisms exist to enable [=people=] to control how they interact
with systems in the world. Mechanisms that increase the number of [=purposes=] for which
their [=data=] is being [=processed=] are referred to as [=opt-in=] or
<dfn data-lt="opt in|opt-in">consent</dfn>; mechanisms that decrease this number of [=purposes=] are known as
<dfn data-lt="opt out">opt-out</dfn>.
When deployed thoughtfully, these mechanisms can enhance [=people=]'s [=autonomy=]. Often,
however, they are used as a way to avoid putting in the difficult work of deciding which
types of [=processing=] are [=appropriate=] and which are not, offloading [=privacy labour=]
to the people using a system.
In specific cases, [=people=] should be able to [=consent=] to more
sensitive [=purposes=], such as having their [=identity=] recognised across contexts or
their reading history shared with a company. The burden of proof on ensuring that informed
[=consent=] has been obtained needs to be very high in this case.
[=Consent=] is comparable to the
general problem of permissions on the Web platform. In the same way that it should be clear when a
given device capability is in use (e.g. when you are providing geolocation or camera access), sharing
data should be set up in such a way that it requires a deliberate, specific action from the [=person=]
(e.g. triggering a form control that is not in a modal dialog); and if that [=consent=] is persistent,
a vivid indicator that data is being transmitted should be shown at all times, in such a way
that the person can easily switch it off. In general, providing [=consent=] should be rare,
difficult, highly intentional, and temporary.
When an [=opt-out=] mechanism exists, it should preferably be complemented by a
<dfn>global opt-out</dfn> mechanism. The function of a [=global opt-out=] mechanism is to
rectify the <dfn class="export">automation asymmetry</dfn> whereby service providers can automate
[=data processing=] but [=people=] have to take manual action to prevent it. A good example of a
[=global opt-out=] mechanism is the <em>Global Privacy Control</em> [[?GPC]].
Conceptually, a [=global opt-out=] mechanism is an automaton operating as part of the
[=user agent=], which is to say that it is equivalent to a robot that carries out a
[=person=]'s bidding by pressing an [=opt-out=] button with every interaction that the
[=person=] has with a site, or that more generally conveys an expression of the [=person=]'s
rights in a relevant jurisdiction. (For instance, under [[?GDPR]], the [=person=] may be
conveying objections to [=processing=] based on legitimate interest or the withdrawal of
[=consent=] to specific [=purposes=].) It should be noted that, since a [=global opt-out=]
signal is reaffirmed automatically with every interaction, it takes precedence
in terms of specificity over any general [=consent=] obtained by a site,
and can only be superseded by specific [=consent=] obtained through a deliberate action taken by
the user with the intent of overriding their global opt-out.
### Privacy Labour {#privacy-labour}
<dfn data-lt="privacy labor|labour|labor">Privacy labour</dfn> is the practice of having a [=person=] carry out
the work of ensuring that [=data processing=] of which they are the subject or recipient is
[=appropriate=], instead of having the [=parties=] be responsible for that work.
Data systems that are based on asking [=people=] for their [=consent=] tend to increase
[=privacy labour=].
More generally, implementations of [=privacy=] are often dominated by self-governing approaches that
offload [=labour=] to [=people=]. This is notably true of the regimes descended from the
<dfn data-lt="FIPs">Fair Information Practices</dfn> ([=FIPs=]), a loose set of principles initially
elaborated in the 1970s in support of individual [=autonomy=] in the face of growing concerns with databases. The
[=FIPs=] generally assume that there is sufficiently little [=data processing=] taking place that any
[=person=] will be able to carry out sufficient diligence to enable [=autonomy=] in their
decision-making. Since they entirely offload the [=privacy labour=]
to people and assume perfect, unlimited [=autonomy=], the [=FIPs=] do not forbid specific
types of [=data processing=] but only place them under different procedural requirements.
Such an approach may have been [=appropriate=] for the limited number of [=parties=] that
were processing data in the 1970s, but it no longer matches today's data economy.
One notable issue with procedural, self-governing approaches to privacy is that they tend to have the same
requirements in situations where people find themselves in a significant asymmetry of
power with a [=party=] — for instance a [=person=] using an essential service provided by a
monopolistic platform — and those where people and [=parties=] are very much on equal
footing, or even where the [=person=] may have greater power, as is the case with small
businesses operating in a competitive environment. It further does not consider cases in
which one [=party=] may coerce other [=parties=] into facilitating its [=inappropriate=]
practices, as is often the case with dominant players in advertising or
in content aggregation ([[?Consent-Lackeys]], [[?CAT]]).
Reference to the [=FIPs=] survives to this day. They are often referenced as "<i>transparency
and choice</i>", which, in today's digital environment, is often an indication that
[=inappropriate=] [=processing=] is being described.
## Collective Governance {#collective}
Privacy [=principles=] are socially negotiated and the definition of [=privacy=] is essentially
contested ([[?Privacy-Contested]]). This makes privacy a problem of collective action ([[?GKC-Privacy]]).
Group-level [=data processing=] may impact populations or individuals, including in
ways that [=people=] could not control even under the optimistic assumptions of [=consent=].
For example, based on group-level analysis, a company may know that <var>site.example</var>
is predominantly visited by [=people=] of a given race or gender, and decide not to run its
job ads there. Visitors to that page are implicitly having their [=data=] processed in
[=inappropriate=] ways, with no way to discover the discrimination or seek relief
([[?Relational-Governance]]).
What we consider is therefore not just the relation between the [=people=] who share data
and the [=parties=] that invite that disclosure ([[?Relational-Turn]]), but also between the [=people=]
who may find themselves categorised indirectly as part of a group even without sharing data. One key
understanding here is that such relations may persist even when data is [=de-identified=]. What's
more, such categorisation of people, voluntary or not, changes the way in which the world operates.
This can produce self-reinforcing loops that can damage both individuals and
groups ([[?Seeing-Like-A-State]]).
In general, collective issues in [=data=] require collective solutions. Web standards help with
[=data governance=] by defining structural controls
in [=user agents=] and establishing or delegating to institutions that can handle issues of [=privacy=].
[=Governance=] will often struggle to achieve its goals if it works primarily by
increasing <em>individual</em> control instead of acting collectively.
Collecting data at large scales can have significant pro-social outcomes. Problems tend to
emerge when [=actors=] [=process=] [=data=]
for collective benefit and for [=self-dealing=] [=purposes=] at the same time.
The [=self-dealing=] [=purposes=] are often justified as bankrolling the pro-social outcomes
but this requires collective oversight to be [=appropriate=].
### Group Privacy {#group-privacy}
There are different ways for [=people=] to become members of a group. Either they can join it
deliberately, making it a self-constituted group such as when joining a club, or they can be
classified into it by an external party, typically a bureaucracy or its computerised equivalent
([[?Beyond-Individual]]). In the latter case, [=people=] may not be aware that they are being
grouped together, and the definition of the group may not be intelligible (for instance if it is
created from opaque machine learning techniques).
Protecting group privacy can take place at two different levels. The existence of a group, or at
least its activities, may need to be protected even in cases in which its members are guaranteed to
remain anonymous. We refer to this as "group privacy". Conversely, [=people=] may wish to protect
knowledge that they are members of the group even though the existence of the group and its actions
may be well known (e.g. membership in a dissident movement under authoritarian rule), which we call
"membership privacy". An example [=privacy violation=] of the former kind
is the fitness app Strava, which did not reveal individual behaviour or identity but published heat
maps of popular running routes. In doing so, it revealed secret US bases around which military
personnel took frequent runs ([[?Strava-Debacle]], [[?Strava-Reveal-Military]]).
When [=people=] do not know that they are members of a group, when they cannot easily find other
members of the group so as to advocate for their rights together, or when they cannot easily
understand why they are being categorised into a given group, their ability to protect themselves
through self-governing approaches to privacy is largely eliminated.
One common problem in group privacy arises when the actions of one member of a group reveal
information that other members would prefer were not shared in this way (or at all). For instance, one person
may publish a picture of an event in which they are featured alongside others while the other people
captured in the same picture would prefer their participation not to be disclosed. Another example
of such issues are sites that enable people to upload their contacts: the person performing the
upload might be more open to disclosing their social networks than the people they are connected to
are. Such issues do not necessarily admit simple, straightforward solutions but they need to be
carefully considered by people building websites.
### Transparency and Research {#transparency}
While transparency rarely does enough to inform the individual choices that [=people=] may
make or to increase their [=autonomy=], it plays a critical role in letting
researchers and reporters inform our
collective decision-making about privacy [=principles=]. This consideration extends the
TAG's resolution on a [Strong and Secure Web Platform](https://www.w3.org/blog/2015/11/strong-web-platform-statement/)
to ensure that "<i>broad testing and audit continues to be possible</i>" where
[=information flows=] and automated decisions are involved.
Such transparency can only function if there are strong rights
of access to data (including data
derived from one's personal data) as well as mechanisms to explain the outcomes of automated
decisions.
## People's Agents {#user-agents}
The <dfn>user agent</dfn> acts as an intermediary between a [=person=] (its [=user=]) and the web.
[=User agents=] implement, to the extent possible, the [=principles=] that collective governance
establishes in favour of individuals. They seek to prevent the creation of asymmetries of
information, and serve their [=user=] by providing them with automation to rectify
[=automation asymmetries=]. Where possible, they protect their [=user=] from receiving
intrusive messages.
The [=user agent=] is expected to align fully with the [=person=] using it and operate exclusively
in that [=person=]'s interest. It is <em>not</em> the [=first party=]. The [=user agent=] serves the
[=person=] as a <dfn>trustworthy agent</dfn>:
it always puts that [=person=]'s interest first. On some occasions, this can mean protecting
that [=person=] from themselves by preventing them from carrying out a dangerous decision,
or by slowing their decision down. For example, the
[=user agent=] will make it difficult for that [=person=] to connect to a site if it can't verify
that the site is authentic. It will check that that [=person=] really intends to expose a
sensitive device to a page. It will prevent that [=person=] from consenting to the permanent
monitoring of their behaviour. Its <dfn class="export">user agent duties</dfn> include
([[?Taking-Trust-Seriously]]):
<dl>
<dt><dfn class="export">Duty of Protection</dfn></dt>
<dd>
Protection requires [=user agents=] to actively protect their [=user=]'s data, beyond
simple security measures. It is insufficient to just encrypt data at rest and in transit:
the [=user agent=] must also limit retention, help ensure that only strictly
necessary data is collected, and require guarantees from any [=party=] with which, to the
user agent's reasonable awareness, the data is shared.
</dd>
<dt><dfn class="export">Duty of Discretion</dfn></dt>
<dd>
Discretion requires the [=user agent=] to make best efforts to enforce
[=principles=] by taking care in the ways it discloses the [=personal data=]
that it manages. Discretion is not
confidentiality or secrecy: trust
can be preserved even when the [=user agent=] shares some [=personal data=], so long as
it is done in an [=appropriately=] discreet manner.
</dd>
<dt><dfn class="export">Duty of Honesty</dfn></dt>
<dd>
Honesty requires that the [=user agent=] try to give its [=user=]
information of which it can reasonably be aware, that is relevant to
them, and that will increase their
autonomy, so long as they can understand it and there is an appropriate
time to present it. This is almost never when the [=person=] is trying to do something else, such as
read a page or activate a feature. The duty of honesty goes well beyond that of
transparency that is often included in older privacy regimes. Unlike transparency, honesty
can't hide relevant information in complex legal notices and it can't rely on
very short summaries provided in a consent dialog.
If the person has provided [=consent=] to [=processing=] of their [=personal data=],
the [=user agent=] should inform the [=person=] of ongoing [=processing=], with a
level of obviousness that is proportional to the reasonably foreseeable impact of the processing.
</dd>
<dt><dfn class="export">Duty of Loyalty</dfn></dt>
<dd>
Because the [=user agent=] is a [=trustworthy agent=], it is held to be loyal to the
[=person=] using it in all situations, including in preference to the [=user agent=]'s implementer.
When a [=user agent=] carries out [=processing=] that is not in the [=person=]'s
interest but instead benefits another [=actor=] (such as the user agent's implementer) that behaviour is
known as <dfn>self-dealing</dfn>. Behaviour can be [=self-dealing=] even if it is done at the
same time as [=processing=] that is in the [=person=]'s interest; what matters is that it
potentially conflicts with that [=person=]'s interest. [=Self-dealing=] is always
[=inappropriate=]. Loyalty is the avoidance of [=self-dealing=].
</dd>
</dl>
These duties ensure the [=user agent=] will <em>care</em> for its [=user=]. In academic
research, this relationship with a [=trustworthy agent=] is often described as "fiduciary"
[[?Fiduciary-UA]]. Some jurisdictions may have a distinct legal meaning for "fiduciary."
Many of the [=principles=] described in the rest of this document extend the [=user agent=]'s duties and
make them more precise.
## Incorporating Different Privacy Principles {#balancing}
While privacy principles are designed to work together and support each other,
occasionally a proposal to improve how a system follows one privacy principle may reduce
how well it follows another principle.
<div class="practice">
<p><span class="practicelab" id="principle-pareto-frontier">When confronted with an
apparent tradeoff, first look for ways to improve all principles at once.</span></p>
Given any initial design that doesn't perfectly satisfy all principles, there are usually
some other designs that improve the situation for some principles without sacrificing
anything about the other principles. Work to find those designs.
Another way to say this is to look for [Pareto
improvements](https://en.wikipedia.org/wiki/Pareto_efficiency) before starting to trade
off between principles.
</div>
Once one is choosing between different designs at the Pareto frontier, the choice of which
privacy principles to prefer is complex and depends heavily on the details of each
particular situation. <span class="note">Note that people's privacy can also be in tension
with non-privacy concerns. As discussed in the [[[Ethical-Web]]], "it is important to
consider the context in which a particular technology is being applied, the expected
audience(s) for the technology, who the technology benefits and who it may disadvantage,
and any power dynamics involved".</span> Despite this complexity, there is a basic ground
rule to follow:
<div class="practice">
<p><span class="practicelab" id="principle-limited-collection-for-safety">If a service
needs to collect extra data from its users in order to protect those or other users, it
must take extra technical and legal measures to ensure that this data can't then be used
for other purposes, such as growing the service.</span></p>
This is a special case of the more general principle that data should not be used for more
[=purposes=] than the data's subjects understood it was being collected for.
A service should explain how it uses people's data to protect them and other people, and
how it might additionally use someone's data if it believes that person has broken the
rules.
</div>
<aside class="example" id="example-technical-legal-measures" title="Technical and Legal Measures">
A site might segregate the data it collects for safety reasons from its business data by:
1. Specifying in its privacy statement that these types of data are kept separate and
   implementing policies and procedures to ensure the data is stored separately (legal
   and procedural/compliance measures).
1. Using multi-party computation to ensure that its business side can't learn the
   sensitive safety data unless both its safety side and a trusted independent third party
   collude (a technical measure).
1. Hiring a trusted auditor to publicly check that the data is effectively segregated (a
   compliance measure).
</aside>
It is attractive to say that if someone violates the norms of a service they're using,
then they sacrifice a proportionate amount of their privacy protections, but
1. Often the service can only prevent the norm violation by also collecting data from
innocent users. This extra collection is not always [=appropriate=], especially if it
allows pervasive monitoring ([[RFC7258]], [[RFC7687]]).
1. If a service operator wants to collect some extra data, it can be tempting for them to
define norms and proportionality that allow them to do so.
The following examples illustrate some of the tensions:
<aside class="example" id="example-sockpuppets" title="Sockpuppets">
A person might want to sign up many accounts ("sockpuppets") or disguise the affiliation
of individual-owned accounts
("[astroturfing](https://en.wikipedia.org/wiki/Astroturfing)") on a service in order to
trick other people into thinking a belief has more support than it really has. This
violates the other people's rights to be free from manipulation.
On the other hand, identifying everyone with enough detail to detect these cases tends to
violate their rights to be free from [=surveillance=] and [=correlation=].
</aside>
<aside class="example" id="example-children" title="Children's Services">
Children using a service (or their guardians) may want to ensure their interaction with
that service is only visible to other children. To accomplish this, the service would need
to check all of its users' ages.
A service can ask its users to self-assert that they are under the specified age
(without verifying those assertions), or it might rely on trusted institutions (like
schools) to verify people's ages; verifying ages directly, however, can be
privacy-invasive. It can require automated facial recognition, collection of live images,
or strongly identifying the children (e.g. via [[NIST-800-63A]]).
</aside>
<aside class="example" id="example-account-security" title="Account Security">
Accounts with a service need to be protected more strongly than just with a username and
password. Since a compromised account reveals all the private information stored in the
account, this is a privacy issue and not just a security issue. Services often store the
historical locations and machine characteristics that have accessed an account, in order
to make it harder to log in from an unusual place or type of machine. They might also
store other personal data, like a phone number or address, in order to check that a
suspicious login actually comes from the real account owner.
</aside>
# Principles for Privacy on the Web
As indicated above, different [=contexts=] require different [=principles=]. This section describes a set
of [=principles=] designed to apply to the Web [=context=] in general. The Web is a big place, and we
fully expect more specific [=contexts=] of the Web to add their own [=principles=] to further constrain
[=information flows=].
To the extent possible, [=user agents=] are expected to enforce these [=principles=]. However, this is not
always possible and additional enforcement mechanisms are needed. One particularly salient issue
is that a [=context=] is not defined in terms of who owns or controls it (it is not a [=party=]). Sharing
[=data=] between different [=contexts=] of a single company is just as much a [=privacy violation=] as
if the same data were shared between unrelated [=parties=].
<aside class="issue">
Please note that the initial focus of this draft was on <a href="#intro"></a>. This section is
incomplete and has uneven maturity. The absence of a specific principle should not be seen as an
indication that the Task Force considers it to be of lesser importance.
</aside>
## Identity on the Web {#identity}
<div class="practice">
<span class="practicelab" id="principle-identity-per-context">A [=user agent=]
should help its user present the [=identity=] they want in each [=context=]
they are in.</span>
</div>
A [=person=]'s <dfn>identity</dfn> is the set of characteristics that define
them. Their identity *in a [=context=]* is the set of characteristics they
present in that context. People frequently present different identities to
different contexts, and also frequently share an identity among several
contexts. People may also wish to present an ephemeral or anonymous identity,
which is just a set of characteristics that is too small or unstable to be useful
for following them through time.
<dfn data-lt="recognize|recognized">Recognition</dfn> is the act of realising that a given [=identity=]
corresponds to the same [=person=] as another [=identity=] which may have been
observed either in another [=context=] or in the same [=context=] but at a
different time.
In order to uphold this principle, sometimes a [=user agent=] needs to *prevent*
[=recognition=], for instance so that a [=site=] can't learn anything about its
[=user=]'s behavior on *another* site, and sometimes the [=user agent=] needs to *support* [=recognition=],
for instance to help its [=user=] *prove* to one [=site=] that they have a