Privacy Research Group


The Privacy Research Group is a weekly meeting of students, professors, and industry professionals who are passionate about exploring, protecting, and understanding privacy in the digital age.

Joining PRG

Because we deal with early-stage work in progress, attendance at meetings of the Privacy Research Group is generally limited to researchers and students who can commit to ongoing participation in the group. To discuss joining the group, please contact Nicholas Tilmes. If you are interested in these topics, but cannot commit to ongoing participation in PRG, you may wish to join the PRG-All mailing list.
 
PRG Student Fellows—Student members of PRG have the opportunity to become Student Fellows. Student Fellows help bring the exciting developments and ideas of the Research Group to the outside world. The primary Student Fellow responsibility is to maintain an active web presence through the ILI student blog, reporting on current events and developments in the privacy field and bringing the world of privacy research to a broader audience. Fellows also have the opportunity to help promote and execute exciting events and colloquia, and even present to the Privacy Research Group. Student Fellow responsibilities are a manageable and enjoyable addition to the regular meeting attendance required of all PRG members. The Student Fellow position is the first step for NYU students into the world of privacy research. Interested students should email Student Fellow Coordinator Nicholas Tilmes with a brief (1-2 paragraph) statement of interest or for more information.


PRG Calendar

Fall 2024

November 20: Nathalie Smuha - Drafting, Thinking, Judging: What We Delegate to Generative AI and How it Can Threaten Democracy

    ABSTRACT: Our reliance on Generative AI, and LLMs in particular, is steadily growing, for an ever wider range of tasks. No matter how extensive or limited the tasks we delegate to generative AI, this delegation inevitably impacts how we express ourselves. After all, it is through language and art that we convey our thoughts, feelings and ideas. Yet what is the effect of such a delegation? This question can be answered from many perspectives. In this draft paper, I focus on one in particular: the human capacity to engage in public deliberation and democratic decision-making. Drawing on Hannah Arendt’s work, I claim this capacity hinges, among other things, on a faculty that is unprecedentedly affected by the delegation to generative AI: the faculty of judgment. Many scholars have examined Arendt’s conceptualization of judgment and its link to democratic politics. Recently, this examination expanded to the space of social media, as an illustration of how the (online) public realm morphed into a social (media) realm. Yet I believe the advent of generative AI has deepened the risk of the erosion of judgment in an unparalleled way. While analogies to the dissemination of propaganda, hate speech, and otherwise problematic content via ‘the internet’ retain validity, the centralization, creation and personalization enabled by generative AI applications have brought this problem to a different level. I therefore argue that, without adequate safeguards, generative AI can pose a dual threat to democracy. First, an authoritarian-minded regime can purposely oblige such systems to be embedded with oppressive values and to censor critical views. Second, more subliminally and therefore perhaps more dangerously, reliance on such systems can also erode our capacity to think critically and to shape our (political) judgment based on a plurality of opinions. The source of this second threat is not authoritarian regimes but our own human laziness and zeal for efficiency. Both, however, are detrimental to democracy.

November 13: Sky (Qin) Ma - Paternalistic Privacy

     ABSTRACT: This paper introduces the concept of “Paternalistic Privacy” to describe a distinctive model of privacy governance in China, centered on the complex power dynamics among individuals, digital platforms, and the state. The analysis rests on a distinction between two primary aspects of privacy protection: preventing state overreach and mitigating abuses by private entities, particularly tech platforms. In recent decades, privacy violations by platforms have become widespread, affecting the general population in the digital era. Existing research predominantly examines state surveillance in China, exposing significant problems in limiting government intrusions into personal privacy. However, far less attention has been given to how China handles platform-based privacy violations. This study addresses this gap by analyzing mechanisms for protecting individuals against platform abuse and examining the state’s regulatory role. The findings of this paper suggest that China’s approach to privacy protection is relatively strong in regulating private entities, supported by proactive legislation and enforcement measures. Through a comparative legal perspective, this paper argues that China’s privacy model is not merely a top-down framework but is better described as “paternalistic information management.” In this model, the state assumes a supervisory role, requiring lower-level entities to report information upwards, facilitating informed decision-making and the coordination of public interests. Here, the primary objective of privacy protection is to uphold personal dignity and secure private life, rather than to serve as a tool for resisting state intrusion. The “Paternalistic Privacy” framework thus redefines the state’s role as a legal supporter of individuals against platform infringements, providing a structural buffer rather than direct interference. By presenting this novel conceptual approach, the paper enhances the understanding of the dynamic interplay among individuals, platforms, and the state in China and offers a new theoretical perspective for the broader discourse on global privacy protection.

November 6: Sevinj Novruzova - Broaden Compliance: What are Guardrails for the Financial Institutions in Ensuring Global Anti-Money Laundering and Combating Terrorism Financing (AML/CFT) in AI Use

     ABSTRACT: In an era where financial institutions (banks) are under increasing scrutiny to comply with Global Anti-Money Laundering and Combating Terrorism Financing (AML/CFT) standards, leveraging advanced technologies like AI presents a significant opportunity. At the same time, there are risks involved in deploying AI solutions to production to enhance AML/CFT programs while driving compliance and efficiency in the financial sector. AML/CFT frameworks are foundational to maintaining the integrity of the financial sector. Such programs encompass a broad range of regulations aimed at ensuring financial institutions (banks) operate within the legal standards set by regulatory bodies. AML/CFT initiatives are vital for detecting and preventing financial crimes such as money laundering, terrorist financing, and fraud. These frameworks require continuous monitoring, reporting, and updating to address evolving threats and regulatory changes. Financial institutions (banks) must implement robust systems to identify suspicious activities, conduct thorough customer due diligence, and maintain detailed records. The integration of AI into these systems can enhance their effectiveness by providing real-time analysis, improving detection capabilities, and streamlining compliance workflows. Compliance with these regulations is crucial to avoid hefty fines, build and maintain the trust of stakeholders, and safeguard customers’ money and data. AI may assist banks in becoming more effective in implementing AML/CFT standards through solutions that enhance the understanding, assessment, and mitigation of risks; customer due diligence (know your customer) and monitoring; and communication among stakeholders. In this presentation, I would like to shed light on the following questions: (i) What is the guidance of U.S. governmental regulatory agencies and international standard-setting organizations on AI use for AML/CFT? (ii) How do these organizations balance technological innovation against the efficiency of the AML/CFT framework? (iii) Do the efforts of the FATF, as the global AML/CFT standard-setter, keep abreast of innovative technologies and business models in the financial sector, ensuring that global standards remain up-to-date and enable “smart” financial sector regulation that both addresses risks and promotes responsible innovation in this area? Should we expect the same efforts from U.S. regulators or legislators on AI use? (iv) What are the conclusions, observations, and recommendations for financial institutions in mastering the guardrails of AI use for AML/CFT purposes?

October 30: Seb Benthall - Agents, Autonomy, Intelligence, Persons, and Properties

     ABSTRACT: This is a very early stage project for which I am seeking general feedback. The project is about laws regulating AI, and how they map to AI's architecture and capabilities, now and in the future. Rather than make an argument at this stage, I would like to present three thought experiments based on future AI scenarios and the legal controversies that might arise. Please see the attached document for those scenarios. I'm looking for feedback as to whether these scenarios are (a) realistic, in terms of the technical claims made, (b) interpreted correctly, from a legal perspective, and (c) capable of inspiring fresh ideas about AI law.

October 23: Aileen Nielsen and Yafit Lev-Aretz - Current Prospects for Automated Data Privacy Opt-Outs

     ABSTRACT: The growing complexity of digital interactions has amplified the need for automated privacy management tools, particularly as user-driven privacy control mechanisms, such as Global Privacy Controls (GPCs), gain legal recognition in the U.S. This study investigates the challenges and adoption rates of GPCs through a panel survey study of over 600 U.S. adults. Findings reveal that nearly 80% of participants successfully installed a GPC browser extension, with 74% continuing to use it after one week. We also investigate propensity to take up and retain use of GPC based on privacy attitudes, privacy knowledge, and demographic attributes, finding little influence of these categories on GPC uptake or use. These results highlight the potential of automated opt-out tools to enhance consumer privacy, while underscoring the need for broader accessibility and education efforts. (Full disclosure: this abstract was generated by ChatGPT based on the circulated draft and only lightly edited thereafter!)
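     (Background for readers unfamiliar with the mechanism: the Global Privacy Control specification conveys a user's opt-out preference to websites via a Sec-GPC: 1 request header and a navigator.globalPrivacyControl property exposed to scripts. The TypeScript sketch below is purely illustrative and is not part of the study; honorOptOut is a hypothetical placeholder for whatever do-not-sell/share logic a site implements.)

        // Illustrative sketch: reading the browser-side GPC signal.
        // navigator.globalPrivacyControl is defined by the GPC specification;
        // honorOptOut is a hypothetical placeholder, not a real API.
        declare global {
          interface Navigator {
            readonly globalPrivacyControl?: boolean;
          }
        }

        export function applyGpcPreference(honorOptOut: () => void): void {
          if (navigator.globalPrivacyControl === true) {
            // Where GPC has legal recognition (e.g., under the CCPA/CPRA),
            // the signal must be honored as a valid opt-out request.
            honorOptOut();
          }
        }

     Server-side, the same preference arrives as a Sec-GPC: 1 request header, which is what makes browser extensions like those in the study effective without any per-site action by the user.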

October 15: Chinmayi Sharma - Interoperable Obscurity

     ABSTRACT: Data brokers are abuse enablers. By systematically sharing personal information online, brokers facilitate stalking, harassment, and intimate-partner violence. While perpetrators of interpersonal abuse bear significant responsibility for their acts, brokers are complicit in physical, psychological, financial, and reputational harms because they make it so easy for people to be surveilled. This, in a nutshell, is the sociotechnical phenomenon of brokered abuse—the role data brokers play in enabling and exacerbating interpersonal abuse. To date, responses to this phenomenon have failed to address one of its key features: people suffering abuse must separately beg hundreds of brokers to conceal their information across the internet, with little recourse if the companies rebuff their efforts to regain some safety and privacy.

October 9: Arna Wömmel - Algorithmic Fairness: The Role of Human Bias

     ABSTRACT: Legal mandates requiring the exclusion of sensitive attributes in the training data of predictive algorithms used in consequential decision-making often conflict with the technical complexities of machine learning, where such interventions can lead to disparate impact. Much of this research focuses on the quantitative properties of these systems in isolation, without considering broader contextual factors. In this paper, I take a behavioral perspective on this issue. Using an online lab experiment, I examine how algorithmic fairness interventions affect discrimination when human decision-makers, supported by predictive algorithms in making consequential decisions, retain ultimate authority. Specifically, I measure how this is influenced by their prior beliefs about the protected groups. I find that these fairness interventions can exacerbate discrimination when decision-makers hold biases against the groups explicitly protected by the algorithm, even when the fairness-aware algorithm is accurate and informative. These individuals tend to override the algorithm more often and mistakenly perceive fairness-aware algorithms as less accurate. As a result, discrimination in the final decision outcomes is not reduced. These findings suggest that to effectively mitigate discrimination in human-machine decision-making, human bias must be addressed not only during the development and training of these tools but also throughout their deployment. More broadly, they may help explain why recent technical advances in reducing algorithmic bias have not yet translated into reduced discrimination in domains where these tools are widely employed.

October 2: Ari Waldman - Technology Expertise in Technology Policy

     ABSTRACT: Scholars, commentators, and policymakers often assume that technology expertise—knowledge of how the technology actually works and perspectives from engineers—is necessary to formulate law and policy about technology. Commentators chide members of Congress and the Supreme Court for not knowing how the Internet works; policymakers have proposed money for new technical staff at the FTC, SEC, and other agencies regulating the algorithmic economy; scholars suggest that technical expertise from engineers is necessary to understand how algorithms work so they can be made transparent and accountable. This turn to engineering expertise is reflexive and operates as an underlying assumption of much law and technology scholarship. In this discussion, I would like to challenge the over-prioritization of technology expertise in technology policymaking. I will highlight the effects of privileging this kind of expertise over others. And I will ask: What are the conditions, if any, under which technology expertise is necessary or welcome in law and policymaking processes?

September 25: Michael Goodyear - Dignity and Deepfakes

     ABSTRACT: Today, we face a perilous technosocial combination: AI-generated deepfakes and the Internet. Believable and accessible, AI-generated deepfakes have already spread sex, lies, and false advertisements across the Internet by targeting everyone from President Biden to Taylor Swift to middle school students. Deepfakes inflict multifarious harms against their targets, stripping them of control over their own identities, harming their reputations, and causing them to feel shame. Public dissemination of deepfakes augments these harms to victims’ dignity. Yet due to the dual problem of a technology that captures one’s likeness and another that disseminates it, few legal remedies are viable. Dignity-based torts are ineffective at restricting online platforms due to the safe harbor of Section 230. Copyright and trademark claims can succeed against online platforms, but do not primarily address dignity, making them of limited utility. This Article reveals the right of publicity as a historically, conceptually, and doctrinally apt vehicle for addressing the worst excesses of deepfakes. Over a century ago, the right of publicity emerged in response to a similar troubling combination: the portable camera and mass media. With no legal remedy for the capture and dissemination of one’s likeness to friend and foe alike, the right of publicity sought to protect individuals’ dignitary and economic interests to curtail the worst harms of modern technology and society. Resolving a circuit split by recognizing the right of publicity as intellectual property outside of Section 230’s liability shield would allow the right of publicity to not only restrict the creation of deepfakes, but also oblige online platforms to adopt notice-and-takedown regimes to restrict their dissemination. Utilizing the right of publicity to counter deepfakes in this manner will simultaneously help restore the original dignitary goals of the right of publicity, which has increasingly become associated with intellectual property aimed only at redressing economic loss.

September 18: Aniket Kesari - Federal Open Data as an Artificial Intelligence Resource

     ABSTRACT: In the 2010s, the open government data movement—a confluence of government transparency and open source advocates—succeeded in making most federal data disclosed by default and free of restriction on downstream use. However, keen-eyed observers noted a “new ambiguity” in open government data policies. It was not clear if the appropriate focus of these policies was “government”—in the sense of accountability and transparency—or “data”—in the sense of downstream use and reuse of government datasets by public and private actors. Even as federal government data policies moved from Executive Branch prerogative to statutory mandate, that ambiguity remained unresolved. But the rise of artificial intelligence—and its accompanying demand for new data sources—tipped the scale in favor of “data.” This Article does not seek to resolve the ambiguity in the other direction or rebalance the scales. Instead, we articulate how open government principles have a role to play even when federal open data is viewed primarily as an asset or resource to build artificial intelligence systems. Further, that role may require refinements in how we think about the “open” part of open government data. In short, there are compelling reasons to condition certain reuses of federal open data on the disclosure of the use even if that would make the use of that data less “open” in some senses.

Spring 2024

April 17: Elettra Bietti - Data is Infrastructure

     ABSTRACT: In the context of the advent of generative AI and changing platform business strategies, data’s role as a currency cannot be overstated. The mass collection of data, its storage, use and reproduction in algorithmic training and processing are key to platform companies’ profits. This paper frames data as an infrastructural and contextual phenomenon. It argues that conceptualizing data as infrastructure prompts a redirection of data governance efforts, across privacy and antitrust law, toward greater contextual awareness. Data is what it does. Unlike oil or other resources and commodities to which it has been compared, data is not a static physical object that exists “out there” and that can be traded on a marketplace. Instead, perhaps more like water, it is a fluid, contextual, materially embedded phenomenon that acquires the multiplicity of functions which we project onto it. Data embeds the purposes, assumptions and rationales of those who produce, collect, use, share and monetize it. As noted by Jathan Sadowski, data in the platform economy is used not only to surveil, profile and target people with content and ads, but also to optimize systems; to manage, control and discipline processes; to model probabilities; to build new products and to grow the value of existing assets. It follows that data is economically and socially relevant, and thus legally relevant, primarily as versatile infrastructure and not as a commodity. In the existing platform context, the most significant uses of data about humans are internal to large platform companies like Meta and Alphabet and fuel the accumulation and exchange of other resources such as curated and refined forms of engagement. Viewing data as infrastructure has the potential to redirect digital governance efforts across privacy, data protection law and antitrust.

April 10: Angela Zhang - The Promise and Perils of China's Regulation of Artificial Intelligence

     ABSTRACT: In recent years, China has emerged as a pioneer in formulating some of the world’s earliest and most comprehensive regulations concerning artificial intelligence (AI) services. Thus far, much attention has focused on the restrictive nature of these rules, raising concerns that they might constrain Chinese AI development. This article is the first to draw attention to the expressive powers of Chinese AI legislation, particularly its information and coordination functions, in enabling the AI industry. Recent legislative measures, such as the interim measures regulating generative AI and various local AI laws, offer little protective value to the Chinese public. Instead, these laws have sent a strong pro-growth signal to the industry while attempting to coordinate various stakeholders to accelerate technological progress. China’s strategically lenient approach to regulation may therefore offer its AI firms a short-term competitive advantage over their European and U.S. counterparts. However, such leniency risks creating potential regulatory lags that could escalate into AI-induced accidents and even disasters. The dynamic complexity of China’s regulatory tactics thus underscores the urgent need for increased international dialogue and collaboration with the country to tackle the safety challenges in AI governance.

April 3: Sebastian Benthall - Complex Sociotechnical System Alignment

     ABSTRACT: A common refrain in the field of AI ethics today is that AI should be aligned with human values. Current research and practice attempt this worthy aim with such techniques as reward modeling and constitutional prompting. In this work, I draw on work from cognitive psychology, systems theory, and science and technology studies to critique the way this problem is normally framed. It reifies AI as an autonomous entity rather than considering how it is embedded in and dependent on a sociotechnical system. It also elides the differences between artificial and living systems. A more realistic look at human values as expressions of human social organization, including human law, invites a new approach to thinking about ethical and accountable sociotechnical system design using complex systems theory. More than a philosophical critique, this suggests a novel frontier for AI research.

March 27: Katja Langenbucher - Financial Profiling

     ABSTRACT: This early-stage project explores financial profiling, understood as “automated processing of personal data with the aim of making a prediction about a person” that involves “financial resources or essential services such as housing, electricity, and telecommunication services”. I describe profiling as a searching and a signaling device. To those who provide access to financial resources or services, profiling is a searching device. For those who seek access, it works as a signaling device. Against this background, I discuss the role of regulation from both perspectives. For the provider of financial resources and essential services, I submit that existing regulation of profiling is largely concerned with those traditionally understood as decision-makers. I point toward predatory pricing, personalized pricing, as well as prudential and paternalistic statistical discrimination. I highlight limits to that approach and move on to propose a focus on profiling in line with the AI Act, the ECJ’s recent interpretation of the GDPR, and U.S. regulation of credit reporting agencies. For those who seek resources and services, I focus on the role of regulation in enabling them to send an appropriate signal. This presupposes that they understand the relevant signal. Along those lines, I applaud the ECJ’s wide reading of the GDPR but reject the court’s restriction to situations where profiling is of “paramount importance” to a decision-maker.

March 13: Thomas Streinz - With Pride and Without Prejudice: Constructing European Data Law around the GDPR

     ABSTRACT: The EU has recently enacted a flurry of new legislation in the digital domain, including the Data Governance Act (DGA), Digital Services Act (DSA), Digital Markets Act (DMA), Data Act (DA), and most recently the Artificial Intelligence Act (AIA). These laws have different regulatory objectives and employ different regulatory approaches, yet they can all be conceptualized to some extent as “data laws”. They regulate to varying degrees what, when, where, how, and why data is to be accessed, shared, and transferred. For this reason, these new data laws need to position themselves vis-à-vis the EU’s established data protection law, especially the General Data Protection Regulation (GDPR). The contested legislative process and the final legislative outcome reveal that all new European data law gravitates around the GDPR. In other words, the EU’s regulatory strategy in the digital domain proceeds with pride for and without prejudice to the GDPR: scholarly criticism of the GDPR’s design and track record apparently does not penetrate the Brussels bubble, as its political economy coalesces around its landmark data protection law. This presentation questions the viability of constructing European data law in this way, as the new European data law is not actually “without prejudice” to the GDPR. Overlaps and tensions between the various legislative acts will eventually have to be resolved. If the EU wants to achieve its regulatory objectives, it may have to recalibrate the relationship between data protection law and other domains of data law.

March 6: Yafit Lev-Aretz and Aileen Nielsen - Understanding Privacy as a Public Health Priority

     ABSTRACT: Privacy harms have taken on the dimensions of a massive crisis. Traditional legal frameworks, which predominantly rely on notice and consent or tort theories, have proven inadequate in addressing privacy’s proliferating challenges. Scholars have proposed novel definitions of privacy that factor in the social elements of privacy and the externalities of privacy decisions, as well as alternatives to individualized privacy self-management that improve the conceptual rationalizations of privacy law. While inspiring and engaging, these scholarly efforts have so far failed to produce meaningful policy changes or successful privacy advocacy litigation strategies. Recent litigation targeting social media companies, whose business models intrinsically implicate personal data, presents a compelling opportunity to advance novel legal theories for holding firms accountable for harmful data practices. Arguments conceptualizing excessive user data extraction as an unlawful public nuisance, as in Seattle School District No. 1 v. Meta Platforms, Snap Inc., TikTok, Alphabet, et al., offer a path forward to discipline market players through a litigation theory, public nuisance, that was previously employed for public health concerns such as the opioid epidemic. Another novel litigation tactic is to attack social media harms by addiction in vulnerable populations, as shown by the case brought by dozens of states against Meta in Arizona v. Meta Platforms, again employing a public health theory and taking lessons from previous public health litigation, as with e-cigarettes and childhood obesity. In this paper, we call for a conscious paradigm shift in privacy scholarship and privacy law. We argue that the current state of privacy should be perceived as a public health concern, and possibly even a public health crisis. Reframing privacy as an essential component of public health, we argue, provides both the rhetorical power to drive reform and practical guidance for privacy law and policymaking. We contextualize the public health framing of privacy harms by tracing parallels to major public health crises that also once focused on individual responsibility: tobacco use, obesity, and the opioid epidemic. In all three cases, early tort litigation faced obstacles because the harms were blamed on individual responsibility rather than industry practices, mirroring current legal conceptions of privacy harms. Additional parallels exist, like difficulties establishing regulatory authority, state and local legislation filling federal gaps, and corporate efforts to obscure research on how industry practices shape individual behaviors. Rather than solely seeking individual consent, a public health framework recognizes privacy’s broader societal impacts and emphasizes preventative protections. This approach justifies regulatory interventions that balance individual rights against collective welfare. Public health governance provides tested methodologies for curbing behaviors, like unfettered data collection, that jeopardize community well-being. In reconceiving privacy through this established lens, policymakers can implement solutions tailored to current systemic threats, moving beyond atomized notions of harm toward much-needed collective safeguards. Finally, we close by defending the notion of understanding privacy as a matter of public health in particular rather than as a public good more generally.

February 28: Ari Waldman - Compromised Advocates: Civil Society and the Future of Privacy Law

     ABSTRACT: This Article tells the inside story of the American Data Privacy and Protection Act (ADPPA) and the role of privacy nonprofit organizations in crafting it. It presents original research in the form of discourse analyses of primary source documents and interviews with Congressional staff and advocates at five privacy nonprofit organizations identified by Congressional staff as critical to drafting the bill. The Article situates ADPPA as a weak and ultimately ineffectual attempt at regulating the harms of data-extractive capitalism at the federal level, demonstrates why ADPPA turned out the way it did, and explains why civil society organizations advocated for particular provisions and not others. It then asks and answers a question critical for the future of privacy law: Why would privacy advocates draft and advocate for a weak law? Scholars are used to answering questions like this by turning to sociological explanations about civil society’s organizational atrophy, its oligarchic tendencies, and its context, or to political science explanations about special interests or coalition advocacy. That conventional approach is persuasive, yet incomplete: it ignores the effects of the law itself. This Article argues that background law, the dynamics of policymaking, and proceduralist or legalistic conceptions of privacy channeled privacy advocacy toward milquetoast reform, contributing to a weak bill that would have changed little about the status quo. ADPPA may not have been signed into law, but it is critical that lawyers and legal scholars learn its lessons now: it is the single best snapshot we have of where privacy law is and where it is going. To help us end the cycle of mistakes and middling reform that keeps our privacy unprotected, especially as rapidly advancing artificial intelligence tools drive an expanding thirst for personal data, this Article goes behind the scenes to show how law weakens the democratic voice in policy and sustains data-extractive capitalism. In making this argument, the Article makes three contributions to sociolegal studies. Its original research pulls back the curtain on privacy civil society, a woefully understudied player in the construction of privacy law, and challenges existing literature that sees privacy nonprofits as far more effective than they really are. It also widens the aperture for scholars trying to understand the role of law in creating social, economic, and institutional relations and the role of social groups and institutions in creating law. Finally, the Article contributes to our understanding of how legislation is drafted in today’s dysfunctional Congress. It concludes by looking forward, using the ADPPA case study to inform future fights to protect privacy in the information economy.

February 21: PRG Student Fellows - Executive Order on Safe, Secure, and Trustworthy Development and Use of AI

February 14: Stein & Florencia Marotta-Wurgler - Training 'Legal Thinking': An Automated Approach to Interpreting Privacy Policies

     ABSTRACT: Privacy policies govern firms’ collection, use, sharing, and security of personal information of consumers. These rich and complex legal documents include contractual promises related to the collection, use, sharing, and protection of personally identifiable information, as well as mandated disclosures dictated by data protection regimes such as the European Union’s GDPR and California’s CCPA. Privacy policies tend to be detailed, lengthy, and complex, making them difficult for consumers to understand and for regulators to use in policing firm behavior. Our project joins recent efforts to classify the terms in privacy policies to help automate their analysis using machine learning. Machine learning relies on human-coded examples to train, adjust, and test the capabilities of artificial intelligence algorithms (AIs). Until very recently, AIs’ ability to process large, unstructured texts was limited. As a result, datasets designed for legal tech focused on short phrases and simple legal concepts. Current AI technology, however, possesses an increased ability to process text, largely through the use of large language models (LLMs). To date, most applications of LLM-based legal tech have relied on untested AIs trained mostly on generic, non-legal datasets. The legal training data that exists is designed around the limitations of the previous generation of AIs, and focuses on the meaning of short sentences and individual clauses, not on entire documents or collections of documents. Our paper makes three contributions. First, we introduce an approach and toolset for labeling online contracts that generate datasets tailored for training and testing this new class of higher-capability AIs’ ability to process legal documents. Our coding labels encompass most terms commonly found in privacy policies and map directly to relevant legal benchmarks across the U.S. and the E.U. Second, we demonstrate how a dataset generated using our approach can be used to test and modify LLMs. We offer some preliminary results in the case of privacy policies, where we “tune” LLMs to label key aspects of privacy policies and automate our coding process. Third, we make our data and tools publicly available for others to use and extend.

February 7: Fabien Lechevalier & Marie Potel-Saville - Moving from Dark to Fair Patterns: Regulation & countermeasures for human-centered digital

     ABSTRACT: Dark patterns, or deceptive patterns, are techniques for deceiving or manipulating users through interfaces that have the substantial effect of subverting or altering a user's autonomy, decision-making, or choice in their online activities. These techniques are used, for example, to lead users to share ever more personal data, to pay more for products or services, to prevent them from canceling subscriptions, or to make it more difficult, or even impossible, to exercise their rights. The context of use of these services generates decision-making based on System 1 (Kahneman) and heuristics, which is fast and inexpensive in terms of cognitive costs. Beyond the direct consequences visible at an individual scale, these techniques contribute to the reinforcement of generalized practices of behavioral manipulation that call into question our collective relationship to technological progress, when it is not directed toward humans’ best interests, and our social contract in the digital age. This presentation aims to provide an overview of the regulatory framework governing dark patterns, to identify its shortcomings, and to propose sustainable regulatory solutions that genuinely take human cognitive limits into account.

January 31: Moritz Schramm - Platform Administrative Law: A Research Agenda

     ABSTRACT: Scholarship on online platforms is at a crossroads. Everyone agrees that platforms must be reformed. Many agree that platforms should respect certain guarantees known primarily from public law, like transparency, accountability, and reason-giving. However, how to install public law-inspired structures like rights protection, review, accountability, deference, hierarchy and discretion, participation, etc. in hyper-capitalist organizations remains a mystery. This article proposes a new conceptual and, by extension, normative framework to analyze and improve platform reform: Platform Administrative Law (PAL). Thinking about platform power through the lens of PAL serves two functions. On the one hand, PAL describes the bureaucratic reality of digital domination by actors like Meta, X, Amazon, or Alibaba. PAL brings into view the mélange of normative material and its infrastructural consequences governing the power relationship between platform and individual. It allows us to take stock of the distinctive norms, institutions, and infrastructural setups enabling and constraining platform power. In that sense, PAL originates, paradoxically, from private actors. On the other hand, PAL draws from ‘classic’ administrative law to offer normative guidance for incrementally infusing ‘good administration’ into platforms. Many challenges platforms face can be thought of as textbook problems of administrative law: maintaining efficiency while paying attention to individual cases, acting proportionately despite resource constraints, operating in fundamental rights-sensitive fields, implementing external accountability feedback, maintaining coherence in rule enforcement, and so on. PAL thereby describes the imperfect and fragmented administrative regimes of platforms and draws inspiration from ‘classic’ administrative law for platforms. Consequently, PAL helps reestablish the supremacy of legitimate rules over technicity and profit in the context of platforms.

January 24: Priyanka Nanayakkara - Will Challenges of Understanding Differential Privacy Prevent it from Becoming Policy?

     ABSTRACT: Differential privacy (DP) is a state-of-the-art approach to privacy-preserving data analysis. Since its invention in 2006, researchers and practitioners have investigated its promise for satisfying various legal requirements, such as those elaborated in Title 13, the GDPR, and HIPAA. If DP is used to meet such requirements, a range of parties—including policymakers, data analysts, and the public—will increasingly be required to make decisions related to its deployment. However, DP is notoriously difficult for non-experts to understand and reason about, and prior evidence suggests that challenges to understanding it may prevent the buy-in necessary for DP to become policy. Whether these challenges can be successfully overcome in a way that results in broad trust remains to be seen. In this talk, I will discuss the implications of three categories of challenges to understanding DP and offer recommendations for how relevant parties may address these challenges to meet policy requirements.
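     (For context, since the talk centers on how hard DP is to communicate: the standard formal definition, due to Dwork et al. (2006) and supplied here rather than drawn from the abstract, says a randomized mechanism $\mathcal{M}$ is $\varepsilon$-differentially private if, for all datasets $D$ and $D'$ differing in a single individual's record and all sets $S$ of possible outputs,

        \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{M}(D') \in S],

     i.e., no one person's data can change the probability of any outcome by more than a factor of $e^{\varepsilon}$.)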

Fall 2023

November 29: Monika Leszczynska - Defining the Boundaries of Marketing Influence: Public Perception and Unfair Trade Practices in the Digital Era
November 15: Aileen Nielsen & Arna Woemmel - Ageism unrestrained: the perplexing lack of action to protect older adults in the digital world
November 8: Aniket Kesari - A Legal Framework for Explainable AI
November 1: Toussaint Nothias - The Idea of Digital Colonialism: An Intellectual History At the Intersection of Research and Digital Rights Advocacy
October 25: Sebastian Benthall - Regulatory CI: Adaptively Regulating Privacy as Contextual Integrity
October 18: Michal Shur-Ofry - Multiplicity as an AI Governance Practice
October 11: Michael Goodyear - Infringing Information Architectures
October 4: Yafit Lev-Aretz - Humanized Choice Architecture
September 27: Alexis Shore - Governing the screenshot feature: Fighting interpersonal breaches of privacy through law and policy
September 20: Moritz Schramm - How the European Union and Big Tech reshape Judicial Power
September 13: David Stein – Rethinking IP (and Competition) in the Age of Online Software
 

Spring 2023

April 19: Anne Bellon - Seeing through the screen. Transparency as regulation in the digital economy
April 12: Gabriel Nicholas, Christopher Morton & Salome Viljoen - Researcher Access to Social Media Data: Lessons from Clinical Trial Data Sharing
April 5: Amanda Parsons & Salome Viljoen - How Law Collides with Informational Capitalism
March 29: Cade Mallett - Judicial Review of Administrative Action Based on AI
March 22: Stein - Innovation Protection for Platform Competition
March 8: Aileen Nielsen & Yafit Lev-Aretz - Disclosure and Our Moral Calculus: Do Data Use Disclosures Change Data Subjects’ Sense of Culpability?
March 1: Ari Ezra Waldman - Privacy Civil Society
February 22: Thomas Streinz - Contingencies of the Brussels Effect in the Digital Domain
February 15: Sebastian Benthall - New Computational Approaches to Information Policy Research
February 8: Argyri Panezi, Leon Anidjar, and Nizan Geslevich Packin - The Metaverse Privacy Problem: If you built it, it will come
February 1: Aniket Kesari - The Consumer Review Fairness Act and the Reputational Sanctions Market
January 25: Michelle Shen - The Brussels Effect as a ‘New-School’ Regulation Globalizing Democracy: A Comparative Review of the CLOUD Act and the European-United States Data Privacy Framework

Fall 2022

November 30: Ira Rubenstein - Artificial Speech and the First Amendment: A Skeptical View
November 16: Michal Gal - Synthetic Data: Legal Implications of the Data-Generation Revolution
November 9: Ashit Srivastava - Default Protectionist Tracing Applications: Erosion of Cooperative Federalism
November 2: María Angel - Privacy's Algorithmic Turn
October 26: Mimee Xu - Netflix and Forget
October 19: Paul Friedl - Dis/similarities in the Design and Development of Legal and Algorithmic Normative Systems: the Case of Perspective API
October 12: Katja Langenbucher - Fair Lending in the Age of AI
October 5: Ari Waldman - Gender Data in the Automated State
September 28: Elettra Bietti - The Structure of Consumer Choice: Antitrust and Utilities' Convergence in Digital Platform Markets
September 21: Mark Verstraete - Adversarial Information Law
September 14: Aniket Kesari - Do Data Breach Notification Laws Work?


Spring 2022

April 27: Stefan Bechtold - Algorithmic Explanations in the Field
April 20: Molly de Blanc - Employing the Right to Repair to Address Consent Issues in Implanted Medical Devices

April 13: Sergio Alonso de Leon - IP law in the data economy: The problematic role of trade secrets and database rights for the emerging data access rights
April 6: Michelle Shen – Criminal Defense Strategy and Brokering Innovation in the Digital and Scientific Era: Justice for Whom?
March 30: Elettra Bietti – From Data to Attention Infrastructures: Regulating Extraction in the Attention Platform Economy
March 23: Aniket Kesari - A Computational Law & Economics Toolkit for Balancing Privacy and Fairness in Consumer Law
March 9: Gabriel Nicholas - Administering Social Data: Lessons for Social Media from Other Sectors
March 2: Jiaying Jiang - Central Bank Digital Currencies and Consumer Privacy Protection
February 23: Aileen Nielsen & Karel Kubicek - How Does Law Make Code? The Timing and Content of Open Source Responses to GDPR and CCPA

February 16: Stein - Unintended Consequences: How Data Protection Laws Leave our Data Less Protected
February 9: Stav Zeitouni - Propertization in Information Privacy
February 2: Ben Sundholm - AI in Clinical Practice: Reconceiving the Black-Box Problem
January 26: Mark Verstraete - Probing Personal Data

 

Fall 2021

December 1: Ira Rubinstein & Tomer Kenneth - Health Misinformation, Online Platforms, and Government Action
November 17: Aileen Nielsen - Can an algorithm be too accurate?
November 10: Thomas Streinz - Data Capitalism
November 3: Barbara Kayondo - A Governance Framework for Enhancing Patient’s Data Privacy Protection in Electronic Health Information Systems
October 27: Sebastian Benthall - Fiduciary Duties for Computational Systems
October 20: Jiaying Jiang - Technology-Enabled Co-Regulation as a New Regulatory Approach to Blockchain Implementation
October 13: Aniket Kesari - Privacy Law Diffusion Across U.S. State Legislatures
October 6: Katja Langenbucher - The EU Proposal for an AI Act – tested on algorithmic credit scoring
September 29: Francesca Episcopo - PrEtEnD – PRivate EnforcemenT in the EcoNomy of Data
September 22: Ben Green - The Flaws of Policies Requiring Human Oversight of Government Algorithms
September 15: Ari Waldman - Misinformation Project in Need of Pithy
 

Spring 2021

April 16: Tomer Kenneth — Public Officials on Social Media
April 9: Thomas Streinz — The Flawed Dualism of Facebook's Oversight Board
April 2: Gabe Nicholas — Have Your Data and Eat it Too: Bridging the Gap between Data Sharing and Data Protection
March 26: Ira Rubinstein  — Voter Microtargeting and the Future of Democracy
March 19: Stav Zeitouni
March 12: Ngozi Nwanta
March 5: Aileen Nielsen
February 26: Tom McBrien
February 19: Ari Ezra Waldman
February 12: Albert Fox Cahn
February 5: Salome Viljoen & Seb Benthall — Data Market Discipline: From Financial Regulation to Data Governance
January 29: Mason Marks  — Biosupremacy: Data Protection, Antitrust, and Monopolistic Power Over Human Behavior
 

Fall 2020

December 4: Florencia Marotta-Wurgler & David Stein — Teaching Machines to Think Like Lawyers
November 20: Andrew Weiner
November 6: Mark Verstraete — Cybersecurity Spillovers
October 30: Ari Ezra Waldman — Privacy Law's Two Paths
October 23: Aileen Nielsen — Tech's Attention Problem
October 16: Caroline Alewaerts — UN Global Pulse
October 9: Salome Viljoen — Data as a Democratic Medium: From Individual to Relational Data Governance
October 2: Gabe Nicholas — Surveillance Delusion: Lessons from the Vietnam War
September 25: Angelina Fisher & Thomas Streinz — Confronting Data Inequality
September 18: Danny Huang — Watching IoTs That Watch Us: Studying IoT Security & Privacy at Scale
September 11: Seb Benthall — Accountable Context for Web Applications
   

Spring 2020

April 29: Aileen Nielsen — "Pricing" Privacy: Preliminary Evidence from Vignette Studies Inspired by Economic Anthropology
April 22: Ginny Kozemczak — Dignity, Freedom, and Digital Rights: Comparing American and European Approaches to Privacy
April 15: Privacy and COVID-19 Policies
April 8: Ira Rubinstein — Urban Privacy
April 1: Thomas Streinz — Data Governance in Trade Agreements: Non-territoriality of Data and Multi-Nationality of Corporations
March 25: Christopher Morten — The Big Data Regulator, Rebooted: Why and How the FDA Can and Should Disclose Confidential Data on Prescription Drugs
March 4: Lilla Montagnani — Regulation 2018/1807 on the Free Flow of Non-Personal Data: Yet Another Piece in the Data Puzzle in the EU?
February 26: Stein — Flow of Data Through Online Advertising Markets
February 19: Seb Benthall — Towards Agent-Based Computational Modeling of Informational Capitalism
February 12: Yafit Lev-Aretz & Madelyn Sanfilippo — One Size Does Not Fit All: Applying a Single Privacy Policy to (too) Many Contexts
February 5: Jake Goldenfein & Seb Benthall — Data Science and the Decline of Liberal Law and Ethics
January 29: Albert Fox Cahn — Reimagining the Fourth Amendment for the Mass Surveillance Age
January 22: Ido Sivan-Sevilia — Europeanization on Demand? The EU's Cybersecurity Certification Regime Between the Rationale of Market Integration and the Core Functions of the State

 

Fall 2019

December 4: Ari Waldman — Discussion on Proposed Privacy Bills
November 20: Margarita Boyarskaya & Solon Barocas [joint work with Hanna Wallach] — What is a Proxy and why is it a Problem?
November 13: Mark Verstraete & Tal Zarsky — Data Breach Distortions
November 6: Aaron Shapiro — Dynamic Exploits: Calculative Asymmetries in the On-Demand Economy
October 30: Tomer Kenneth — Who Can Move My Cheese? Other Legal Considerations About Smart-Devices
October 23: Yafit Lev-Aretz & Madelyn Sanfilippo — Privacy and Religious Views
October 16: Salome Viljoen — Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought
October 9: Katja Langenbucher — Responsible A.I. Credit Scoring
October 2: Michal Shur-Ofry — Robotic Collective Memory   
September 25: Mark Verstraete — Inseparable Uses in Property and Information Law
September 18: Gabe Nicholas & Michael Weinberg — Data, To Go: Privacy and Competition in Data Portability 
September 11: Ari Waldman — Privacy, Discourse, and Power


Spring 2019

April 24: Sheila Marie Cruz-Rodriguez — Contractual Approach to Privacy Protection in Urban Data Collection
April 17: Andrew Selbst — Negligence and AI's Human Users
April 10: Sun Ping — Beyond Security: What Kind of Data Protection Law Should China Make?
April 3: Moran Yemini — Missing in "State Action": Toward a Pluralist Conception of the First Amendment
March 27: Nick Vincent — Privacy and the Human Microbiome
March 13: Nick Mendez — Will You Be Seeing Me in Court? Risk of Future Harm, and Article III Standing After a Data Breach
March 6: Jake Goldenfein — Through the Handoff Lens: Are Autonomous Vehicles No-Win for Users
February 27: Cathy Dwyer — Applying the Contextual Integrity Framework to Cambridge Analytica
February 20: Ignacio Cofone & Katherine Strandburg — Strategic Games and Algorithmic Transparency
February 13: Yan Shvartzshnaider — Going Against the (Appropriate) Flow: A Contextual Integrity Approach to Privacy Policy Analysis
January 30: Sabine Gless — Predictive Policing: In Defense of 'True Positives'


Fall 2018

December 5: Discussion of current issues
November 28: Ashley Gorham — Algorithmic Interpellation
November 14: Mark Verstraete — Data Inalienabilities
November 7: Jonathan Mayer — Estimating Incidental Collection in Foreign Intelligence Surveillance
October 31: Sebastian Benthall — Trade, Trust, and Cyberwar
October 24: Yafit Lev-Aretz — Privacy and the Human Element
October 17: Julia Powles — AI: The Stories We Weave; The Questions We Leave
October 10: Andy Gersick — Can We Have Honesty, Civility, and Privacy Online? Implications from Evolutionary Theories of Animal and Human Communication
October 3: Eli Siems — The Case for a Disparate Impact Regime Covering All Machine-Learning Decisions
September 26: Ari Waldman — Privacy's False Promise
September 19: Marijn Sax — Targeting Your Health or Your Wallet? Health Apps and Manipulative Commercial Practices
September 12: Mason Marks — Algorithmic Disability Discrimination
 

Spring 2018

May 2: Ira Rubinstein — Article 25 of the GDPR and Product Design: A Critical View [with Nathan Good and Guillermo Monge, Good Research]
April 25: Elana Zeide — The Future Human Futures Market
April 18: Taylor Black — Performing Performative Privacy: Applying Post-Structural Performance Theory for Issues of Surveillance Aesthetics
April 11: John Nay — Natural Language Processing and Machine Learning for Law and Policy Texts
April 4: Sebastian Benthall — Games and Rules of Information Flow
March 28: Yan Shvartzshnaider and Noah Apthorpe — Discovering Smart Home IoT Privacy Norms using Contextual Integrity
February 28: Thomas Streinz — TPP’s Implications for Global Privacy and Data Protection Law
February 21: Ben Morris, Rebecca Sobel, and Nick Vincent — Direct-to-Consumer Sequencing Kits: Are Users Losing More Than They Gain?
February 14: Eli Siems — Trade Secrets in Criminal Proceedings: The Battle over Source Code Discovery
February 7: Madeline Bryd and Philip Simon — Is Facebook Violating U.S. Discrimination Laws by Allowing Advertisers to Target Users?
January 31: Madelyn Sanfilippo — Sociotechnical Polycentricity: Privacy in Nested Sociotechnical Networks
January 24: Jason Schultz and Julia Powles — Discussion about the NYC Algorithmic Accountability Bill


Fall 2017

November 29: Kathryn Morris and Eli Siems — Discussion of Carpenter v. United States
November 15: Leon Yin — Anatomy and Interpretability of Neural Networks
November 8: Ben Zevenbergen — Contextual Integrity for Password Research Ethics?
November 1: Joe Bonneau — An Overview of Smart Contracts
October 25: Sebastian Benthall — Modeling Social Welfare Effects of Privacy Policies
October 18: Sue Glueck — Future-Proofing the Law
October 11: John Nay — Algorithmic Decision-Making Explanations: A Taxonomy and Case Study
October 4: Finn Brunton — 'The Best Surveillance System we Could Imagine': Payment Networks and Digital Cash
September 27: Julia Powles — Promises, Polarities & Capture: A Data and AI Case Study
September 20: Madelyn Rose Sanfilippo and Yafit Lev-Aretz — Breaking News: How Push Notifications Alter the Fourth Estate
September 13: Ignacio Cofone — Anti-Discriminatory Privacy
 

Spring 2017

April 26: Ben Zevenbergen — Contextual Integrity as a Framework for Internet Research Ethics
April 19: Beate Roessler — Manipulation
April 12: Amanda Levendowski — Conflict Modeling
April 5: Madelyn Sanfilippo — Privacy as Commons: A Conceptual Overview and Case Study in Progress
March 29: Hugo Zylberberg — Reframing the fake news debate: influence operations, targeting-and-convincing infrastructure and exploitation of personal data
March 22: Caroline Alewaerts, Eli Siems and Nate Tisa will lead discussion of three topics flagged during our current events roundups: smart toys, the recently leaked documents about CIA surveillance techniques, and the issues raised by the government’s attempt to obtain recordings from an Amazon Echo in a criminal trial.
March 8: Ira Rubinstein — Privacy Localism
March 1: Luise Papcke — Project on (Collaborative) Filtering and Social Sorting
February 22: Yafit Lev-Aretz and Grace Ha (in collaboration with Katherine Strandburg) — Privacy and Innovation
February 15: Argyri Panezi — Academic Institutions as Innovators but also Data Collectors: Ethical and Other Normative Considerations
February 8: Katherine Strandburg — Decisionmaking, Machine Learning and the Value of Explanation
February 1: Argyro Karanasiou — A Study into the Layers of Automated Decision Making: Emergent Normative and Legal Aspects of Deep Learning
January 25: Scott Skinner-Thompson — Equal Protection Privacy
 

Fall 2016

December 7: Tobias Matzner — The Subject of Privacy
November 30: Yafit Lev-Aretz — Data Philanthropy
November 16: Helen Nissenbaum — Must Privacy Give Way to Use Regulation?
November 9: Bilyana Petkova — Domesticating the "Foreign" in Making Transatlantic Data Privacy Law
November 2: Scott Skinner-Thompson — Recording as Heckling
October 26: Yan Shvartzshnaider — Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
October 19: Madelyn Sanfilippo — Privacy and Institutionalization in Data Science Scholarship
October 12: Paula Kift — The Incredible Bulk: Metadata, Foreign Intelligence Collection, and the Limits of Domestic Surveillance Reform
October 5: Craig Konnoth — Health Information Equity
September 28: Jessica Feldman — the Amidst Project
September 21: Nathan Newman — UnMarginalizing Workers: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace
September 14: Kiel Brennan-Marquez — Plausible Cause
 

Spring 2016

April 27: Yan Shvartzshnaider — Privacy and IoT AND Rebecca Weinstein — Net Neutrality's Impact on FCC Regulation of Privacy Practices
April 20: Joris van Hoboken — Privacy in Service-Oriented Architectures: A New Paradigm? [with Seda Gürses]
April 13: Florencia Marotta-Wurgler — Who's Afraid of the FTC? Enforcement Actions and the Content of Privacy Policies (with Daniel Svirsky)
April 6: Ira Rubinstein — Big Data and Privacy: The State of Play
March 30: Clay Venetis — Where is the Cost-Benefit Analysis in Federal Privacy Regulation?
March 23: Daisuke Igeta — An Outline of Japanese Privacy Protection and its Problems; Johannes Eichenhofer — Internet Privacy as Trust Protection
March 9: Alex Lipton — Standing for Consumer Privacy Harms
March 2: Scott Skinner-Thompson — Pop Culture Wars: Marriage, Abortion, and the Screen to Creed Pipeline [with Professor Sylvia Law]
February 24: Daniel Susser — Against the Collection/Use Distinction
February 17: Eliana Pfeffer — Data Chill: A First Amendment Hangover
February 10: Yafit Lev-Aretz — Data Philanthropy
February 3: Kiel Brennan-Marquez — Feedback Loops: A Theory of Big Data Culture
January 27: Leonid Grinberg — But Who Blocks the Blockers? The Technical Side of the Ad-Blocking Arms Race
 

Fall 2015

December 2: Leonid Grinberg But Who BLocks the Blockers? The Technical Side of the Ad-Blocking Arms Race AND Kiel Brennan-Marquez - Spokeo and the Future of Privacy Harms
November 18: Angèle Christin - Algorithms, Expertise, and Discretion: Comparing Journalism and Criminal Justice
November 11: Joris van Hoboken Privacy, Data Sovereignty and Crypto
November 4: Solon Barocas and Karen Levy Understanding Privacy as a Means of Economic Redistribution
October 28: Finn Brunton Of Fembots and Men: Privacy Insights from the Ashley Madison Hack
October 21: Paula Kift Human Dignity and Bare Life - Privacy and Surveillance of Refugees at the Borders of Europe
October 14: Yafit Lev-Aretz and co-author Nizan Geslevich Packin Between Loans and Friends: On Social Credit and the Right to be Unpopular
October 7: Daniel Susser What's the Point of Notice?
September 30: Helen Nissenbaum and Kirsten Martin Confounding Variables Confounding Measures of Privacy
September 23: Jos Berens and Emmanuel Letouzé Group Privacy in a Digital Era
September 16: Scott Skinner-Thompson Performative Privacy
September 9: Kiel Brennan-Marquez Vigilantes and Good Samaritan
 

Spring 2015

April 29: Sofia Grafanaki Autonomy Challenges in the Age of Big Data; David Krone Compliance, Privacy and Cyber Security Information Sharing; Edwin Mok Trial and Error: The Privacy Dimensions of Clinical Trial Data Sharing; Dan Rudofsky Modern State Action Doctrine in the Age of Big Data

April 22: Helen Nissenbaum 'Respect for Context' as a Benchmark for Privacy: What it is and Isn't
April 15: Joris van Hoboken From Collection to Use Regulation? A Comparative Perspective
April 8: Bilyana Petkova — Privacy and Federated Law-Making in the EU and the US: Defying the Status Quo?
April 1: Paula Kift — Metadata: An Ontological and Normative Analysis

March 25: Alex Lipton — Privacy Protections for the Secondary User of Consumer-Watching Technologies

March 11: Rebecca Weinstein (Cancelled)
March 4: Karen Levy & Alice Marwick — Unequal Harms: Socioeconomic Status, Race, and Gender in Privacy Research

February 25: Luke Stark — NannyScam: The Normalization of Consumer-as-Surveillor

February 18: Brian Choi A Prospect Theory of Privacy

February 11: Aimee Thomson — Cellular Dragnet: Active Cell Site Simulators and the Fourth Amendment

February 4: Ira Rubinstein — Anonymity and Risk

January 28: Scott Skinner-Thompson Outing Privacy

 

Fall 2014

December 3: Katherine Strandburg — Discussion of Privacy News [which can include recent court decisions, new technologies or significant industry practices]

November 19: Alice Marwick — Scandal or Sex Crime? Ethical and Privacy Implications of the Celebrity Nude Photo Leaks

November 12: Elana Zeide — Student Data and Educational Ideals: examining the current student privacy landscape and how emerging information practices and reforms implicate long-standing social and legal traditions surrounding education in America. The Proverbial Permanent Record [PDF]

November 5: Seda Gürses — Let's first get things done! On division of labor and practices of delegation in times of mediated politics and politicized technologies
October 29: Luke Stark — Discussion on whether “notice” can continue to play a viable role in protecting privacy in mediated communications and transactions given the increasing complexity of the data ecology and economy. Readings:
Kirsten Martin — Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online
Ryan Calo — Against Notice Skepticism in Privacy (and Elsewhere)
Lorrie Faith Cranor — Necessary but Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice
October 22: Matthew Callahan — Warrant Canaries and Law Enforcement Responses
October 15: Karen Levy — Networked Resistance to Electronic Surveillance
October 8: Joris van Hoboken — The Right to be Forgotten Judgment in Europe: Taking Stock and Looking Ahead

October 1: Giancarlo Lee — Automatic Anonymization of Medical Documents
September 24: Christopher Sprigman — MSFT "Extraterritorial Warrants" Issue 

September 17: Sebastian Zimmeck — Privee: An Architecture for Automatically Analyzing Web Privacy Policies [with Steven M. Bellovin]
September 10: Organizational meeting
 

Spring 2014

April 30: Seda Gürses — Privacy is Security is a prerequisite for Privacy is not Security is a delegation relationship
April 23: Milbank Tweed Forum Speaker — Brad Smith: The Future of Privacy
April 16: Solon Barocas — How Data Mining Discriminates - a collaborative project with Andrew Selbst, 2012-13 ILI Fellow
March 12: Scott Bulua & Amanda Levendowski — Challenges in Combatting Revenge Porn

March 5: Claudia Diaz — In PETs we trust: tensions between Privacy Enhancing Technologies and information privacy law. The presentation is drawn from the paper "Hero or Villain: The Data Controller in Privacy Law and Technologies" with Seda Gürses and Omer Tene.

February 26: Doc Searls Privacy and Business

February 19: Report from the Obfuscation Symposium, including brief tool demos and individual impressions

February 12: Ira Rubinstein The Ethics of Cryptanalysis — Code Breaking, Exploitation, Subversion and Hacking
February 5: Felix Wu — The Commercial Difference, which grows out of a piece just published in the University of Chicago Legal Forum called The Constitutionality of Consumer Privacy Regulation

January 29: Organizational meeting
 

Fall 2013

December 4: Akiva Miller — Are access and correction tools, opt-out buttons, and privacy dashboards the right solutions to consumer data privacy? & Malte Ziewitz — What does transparency conceal?
November 20: Nathan Newman — Can Government Mandate Union Access to Employer Property? On Corporate Control of Information Flows in the Workplace

November 6: Karen Levy — Beating the Box: Digital Enforcement and Resistance
October 23: Brian Choi — The Third-Party Doctrine and the Required-Records Doctrine: Informational Reciprocals, Asymmetries, and Tributaries
October 16: Seda Gürses — Privacy is Don't Ask, Confidentiality is Don't Tell
October 9: Katherine Strandburg — Freedom of Association Constraints on Metadata Surveillance
October 2: Joris van Hoboken — A Right to be Forgotten
September 25: Luke Stark — The Emotional Context of Information Privacy
September 18: Discussion — NSA/Pew Survey
September 11: Organizational Meeting


Spring 2013

May 1: Akiva Miller — What Do We Worry About When We Worry About Price Discrimination?
April 24: Hannah Bloch-Wehba and Matt Zimmerman — National Security Letters [NSLs]

April 17: Heather Patterson — Contextual Expectations of Privacy in User-Generated Mobile Health Data: The Fitbit Story
April 10: Katherine Strandburg — ECPA Reform; Catherine Crump: Cotterman Case; Paula Helm: Anonymity in AA

April 3: Ira Rubinstein — Voter Privacy: A Modest Proposal
March 27: Privacy News Hot Topics — US v. Cotterman, Drones' Hearings, Google Settlement, Employee Health Information Vulnerabilities, and a Report from Differential Privacy Day

March 13: Nathan Newman — The Economics of Information in Behavioral Advertising Markets
March 6: Mariana Thibes — Privacy at Stake: Challenging Issues in the Brazilian Context
February 27: Katherine Strandburg — Free Fall: The Online Market's Consumer Preference Disconnect
February 20: Brad Smith — Privacy at Microsoft
February 13: Joe Bonneau — What will it mean for privacy as user authentication moves beyond passwords?
February 6: Helen Nissenbaum — The (Privacy) Trouble with MOOCs
January 30: Welcome meeting and discussion on current privacy news
 

Fall 2012

December 5: Martin French — Preparing for the Zombie Apocalypse: The Privacy Implications of (Contemporary Developments in) Public Health Intelligence
November 28: Scott Bulua and Catherine Crump — A framework for understanding and regulating domestic drone surveillance
November 21: Lital Helman — Corporate Responsibility of Social Networking Platforms
November 14: Travis Hall — Cracks in the Foundation: India's Biometrics Programs and the Power of the Exception
November 7: Sophie Hood — New Media Technology and the Courts: Judicial Videoconferencing
October 24: Matt Tierney and Ian Spiro — Cryptogram: Photo Privacy in Social Media
October 17: Frederik Zuiderveen Borgesius — Behavioural Targeting: How to Regulate?

October 10: Discussion of 'Model Law'

October 3: Agatha Cole — The Role of IP Address Data in Counter-Terrorism Operations & Criminal Law Enforcement Investigations: Looking towards the European framework as a model for U.S. Data Retention Policy
September 26: Karen Levy — Privacy, Professionalism, and Techno-Legal Regulation of U.S. Truckers
September 19: Nathan Newman — Cost of Lost Privacy: Google, Antitrust and Control of User Data