Any presidential plan to ‘take on tech’ in 2024 must address power and consent

The New York Cyber Abuse Task Force is a coalition of legal and non-legal professionals, survivors, and technology workers formed to fight tech-facilitated gender-based violence in all its forms. Our findings consistently show that the most dangerous and persistent forms of tech abuse occur in the context of intimate partner violence and image-based abuse.


The New York Cyber Abuse Task Force offers a framework for humane technology product design

The first 2024 presidential debates start this week (I know, I’m sorry), and one question is certain: what is each candidate’s plan to take on tech?

In the past, when someone asked me what I thought of the impact of tech or AI on society, it was like being asked “what do you think of … hammers?” I, like many working in tech, saw something like artificial intelligence as simply a tool to augment human brainpower rather than physical power. How it gets deployed depends entirely on human willpower. For example, hammers are great for assembling a new crib, but we’d never give a hammer to a baby. Construction workers can use hammers to build homes, but not to threaten their spouses. We expect industries and governments to apply the same common-sense discretion to technology and intelligence tools, especially when those tools operate at a speed and scale beyond any single swing of a hammer.

Now, thirty years after the internet’s mainstream debut, our hunger for that discretion crescendos. And with good reason, considering the explosion of child pornography on the web (and how AI contributes to it); the flourishing of hate groups online; the biases created by the lack of diversity in algorithms’ data sets; the harm of social media features to adolescents; the proliferation of image-based sexual abuse and deep-fake porn; the digital safety risks of financial technologies for survivors of intimate partner violence; and the erosion of privacy in technology.

Yet no branch of the U.S. government seems poised to make those discretionary calls any time soon. Congress has hope, but no real plan (maybe a plan for a plan?). The Supreme Court’s recent decisions in Counterman v. Colorado and Gonzalez v. Google both signal an unwillingness to apply real-world discretion to the digital world. And whatever soaring rhetoric our presidential candidates offer this election cycle, we know that campaign promises don’t always translate to executable plans. So without a government protecting its people, we wait for the industry to self-regulate (ex: New Zealand), but how much longer should we wait?

A coalition of legal and non-legal professionals, survivors, and technology workers came together, as the New York Cyber Abuse Task Force, to fight technology-facilitated gender-based violence in all its forms… because we couldn’t wait. Technology accelerates the speed and magnifies the scale of existing illegal, sexually abusive behavior – stalking, spying, spoofing, impersonation, sextortion, and the non-consensual distribution of sexually explicit images and videos – beyond imagination. Lawyers and law enforcement support the survivors of these crimes who are forced to retreat from a digital life – or life altogether. We published the 220-page NY Cyber Abuse Task Force “Manual for Advocates” to help lawyers navigate the courts for the issues that plague abuse victims, but we still need more. We need the engineers, designers, product managers and analysts – the ones with the technical know-how and a desire to use technology for building a better future – to change the industry from within.

Now, if I’m working in big tech, as I did for twenty years, I hear that call but question what I can realistically do from my perch. I’m juggling too many projects already and wondering when the next round of layoffs will hit. However, the answer is less burdensome than one might think, because when you build for the users most affected by the harms of technology, the future of all your users is brighter. All those projects you’re juggling will benefit, because a rising tide really does lift all boats.

Review the aforementioned litany of tech-accelerated abuses and five overlapping groups emerge as the most vulnerable to the harms of technology: children, women, members of the LGBTQIA+ community, people of color, and low-income individuals. When product teams consider these groups as primary users rather than edge cases, all users benefit. Case in point: the recent updates to AirTag tracking devices, which help domestic violence survivors avoid being stalked by their abusers … but also help all users better protect their privacy. Our task force has found that ‘humane’ product design boils down to a product team being able to answer ‘NO’ to the following three questions:

  1. Could your product let someone exert P.O.W.E.R. over another person?
  2. Is that power exerted without the subject’s C.O.N.S.E.N.T.?
  3. Does anything prevent your company from providing law enforcement the right E.V.I.D.E.N.C.E. needed to hold an abuser accountable?

The three acronyms of P.O.W.E.R., C.O.N.S.E.N.T. and E.V.I.D.E.N.C.E. address twenty factors of online abuse that can help protect survivors long before the lawyers are involved.

Whether you’re writing the PRD, presenting wireframes or mocks, architecting the database, writing the technical design doc, determining and implementing API calls, or performing a security or privacy review, you can ask the questions below to determine how humane your product design is for vulnerable populations (download the one-sheet here):

1st Question: Could your product let someone exert P.O.W.E.R. over another person?

P: Does the success of your product depend on the propagation of content your company did not create?

O: Do you treat behavior in your product differently than if it happened offline? Is there a chance the product could reflect activity online differently than it is occurring offline?

W: Could anyone who is watching, changing settings, or asking for help on the user’s account differ from the user herself in your product? (A minimal sketch of this check follows this list.)

E: Does your product entangle the user’s account with either another account belonging to the same person or an account belonging to another person altogether?

R: Could your product be used by one person to malign the reputation of another person?

If YES to any of the above, move to the second question.

If NO to all of the above, first check if others on your team would answer the same way. If they all say NO, great – move to the third question.
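To make the “W” sub-question concrete: one way a team might check it is to compare the principal actually performing a change against the account’s owner before a sensitive setting is touched, and to hold the change for review instead of applying it silently. The following is a minimal sketch in Python, not a prescribed implementation; names like `AccountEvent`, `SENSITIVE_SETTINGS`, `notify_owner`, and `flag_for_review` are hypothetical placeholders for whatever your product actually uses.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Settings whose silent change by a non-owner is a common abuse vector
# (location sharing, linked devices, recovery contacts, and so on).
SENSITIVE_SETTINGS = {"location_sharing", "linked_devices", "recovery_contact"}

@dataclass
class AccountEvent:
    account_owner_id: str      # the user the account belongs to
    acting_principal_id: str   # whoever is making the change: owner, partner, support agent
    setting: str
    new_value: str
    timestamp: datetime

def apply_setting_change(event: AccountEvent, notify_owner, flag_for_review) -> bool:
    """Apply a setting change only after checking who is actually acting.

    Returns True if the change was applied immediately, False if it was
    held for review because the actor differs from the account owner.
    """
    if event.setting in SENSITIVE_SETTINGS and event.acting_principal_id != event.account_owner_id:
        # Someone other than the owner is changing a sensitive setting:
        # tell the owner and hold the change rather than applying it silently.
        notify_owner(event.account_owner_id,
                     f"'{event.setting}' change requested by {event.acting_principal_id}")
        flag_for_review(event)
        return False
    # Owner-initiated or non-sensitive change: apply as usual.
    print(f"Applied {event.setting}={event.new_value} for {event.account_owner_id}")
    return True

if __name__ == "__main__":
    event = AccountEvent(
        account_owner_id="user_123",
        acting_principal_id="partner_456",   # not the account owner
        setting="location_sharing",
        new_value="on",
        timestamp=datetime.now(timezone.utc),
    )
    apply_setting_change(
        event,
        notify_owner=lambda uid, msg: print(f"notify {uid}: {msg}"),
        flag_for_review=lambda e: print(f"held for review: {e.setting}"),
    )
```

The point is not this specific mechanism; it is that a mismatch between the account owner and the person acting on the account gets surfaced to the owner rather than hidden from her.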


2nd Question: Is that power exerted without the subject’s C.O.N.S.E.N.T.?

C: How does a user determine, in real time, whether their account or device has been compromised in your product?

O: Is the default feature setting opt-in or opt-out in your product? Is the user aware?

N: How does the user negotiate their presence in your product? Does she have an opportunity to understand how her data will be used and where it’s going?

S: How does the user screenshot, store, save and send proof of unconsented-to activity to authorities in your product? Conversely, can a user prevent someone else from screenshotting/saving material intended for them to receive, but not to be distributed to others?

E: How do you monitor points of egress (and ingress, for that matter) for anomalies in your product? (A minimal monitoring sketch follows this list.)

N: If you use social features, how does a user notify her network that she’s cut ties with someone, and warn of potential impersonation in your product? Conversely, is there a way for the user to know if someone reaching out to her is tied to the network of the person she blocked?

T: Can a user report harm in-product in a timely fashion? Do you respond to the subject’s reports of harassment in a timely manner? Are your revocations of access and removals of content timely? What does it take to trigger your break-glass plan in a timely manner? (ex)

If you don’t have an answer for one or more of these sub-questions, consider whether your product’s features deny users options for privacy and consent.
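For the egress sub-question, one simple pattern is a sliding-window check on per-account export and download activity, so that a sudden spike (say, someone bulk-downloading a partner’s photos or message history) is flagged for human review. Below is a minimal sketch in Python under assumed thresholds; `EgressMonitor`, `WINDOW`, and `MAX_EVENTS_PER_WINDOW` are hypothetical names, and a real system would tune the baseline per product.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta, timezone

# Hypothetical thresholds: how many export/download events in the window
# are "normal" before the account is flagged for human review.
WINDOW = timedelta(hours=1)
MAX_EVENTS_PER_WINDOW = 50

class EgressMonitor:
    """Flags accounts whose data-export activity spikes above a simple baseline."""

    def __init__(self):
        self._events = defaultdict(deque)  # account_id -> timestamps of recent exports

    def record_export(self, account_id: str, when: datetime) -> bool:
        """Record one egress event (download, export, share-out).

        Returns True if the account's recent activity looks anomalous.
        """
        events = self._events[account_id]
        events.append(when)
        # Drop events that have fallen out of the sliding window.
        while events and when - events[0] > WINDOW:
            events.popleft()
        if len(events) > MAX_EVENTS_PER_WINDOW:
            # e.g. someone bulk-downloading a partner's photos or message history.
            print(f"ALERT: {account_id} made {len(events)} exports in the last hour")
            return True
        return False

if __name__ == "__main__":
    monitor = EgressMonitor()
    now = datetime.now(timezone.utc)
    for i in range(60):
        monitor.record_export("user_123", now + timedelta(seconds=i))
```

The same sliding-window idea can be applied to ingress events such as logins or new-device registrations.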


3rd Question: Does anything prevent your company from providing law enforcement the right E.V.I.D.E.N.C.E. needed to hold an abuser accountable?

E: What is your team’s plan for working with law enforcement to decrypt encrypted messages?

V: How do you provide verification of identity and data authenticity as required in court? Can you connect the actor to the activity?

I: How does your product’s back end integrate with other databases of evidence from previous cyber abuse violations and bad actors, so it can learn from them and detect future abuse?

D: Can you enumerate the data logs of user activity, with protocols for when that data should be stored and for how long? (A minimal schema sketch follows this list.)

E: How do you ensure that your data logs include a plain description of each field explaining its meaning, its metadata values, and what could be a sign of manipulation?

N: How do current legal protections cover the next iteration of your product or feature?

C: How does your product detect when crimes and confessions are being aired or live-streamed?

E: How quickly can the above data be exported to law enforcement in a human-readable format?

If you don’t have an answer for one or more of these sub-questions, consider whether your product is ready to release when it offers no way to hold its abusers accountable.
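To ground the “D” and the two “E” sub-questions about data logs, here is a minimal sketch in Python of an activity log whose fields carry plain-language descriptions, whose entries expire under a stated retention window, and which can be exported in a human-readable format. The field names, the 365-day retention period, and the CSV layout are all hypothetical; the real values belong in a written protocol agreed with counsel, not in code.

```python
import csv
import io
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; the real value belongs in a written protocol.
RETENTION = timedelta(days=365)

# Plain-language meaning of every field, so an investigator (or a jury)
# does not have to guess what a column means or spot manipulation unaided.
FIELD_DESCRIPTIONS = {
    "event_id": "Stable identifier for this event; gaps or duplicates may indicate tampering.",
    "account_id": "Account that performed the action.",
    "action": "What the account did, e.g. upload, share, login.",
    "target": "What the action touched: a content id, recipient, or device.",
    "ip_address": "Source IP as seen by the service; may be a VPN or public Wi-Fi exit.",
    "occurred_at": "Server-side UTC timestamp; client clocks can be manipulated.",
}

@dataclass
class ActivityLogEntry:
    event_id: str
    account_id: str
    action: str
    target: str
    ip_address: str
    occurred_at: datetime

def purge_expired(entries: list[ActivityLogEntry], now: datetime) -> list[ActivityLogEntry]:
    """Keep only entries still inside the retention window."""
    return [e for e in entries if now - e.occurred_at <= RETENTION]

def export_for_law_enforcement(entries: list[ActivityLogEntry]) -> str:
    """Produce a human-readable CSV: field descriptions first, then the rows."""
    out = io.StringIO()
    for name, meaning in FIELD_DESCRIPTIONS.items():
        out.write(f"# {name}: {meaning}\n")
    writer = csv.DictWriter(out, fieldnames=list(FIELD_DESCRIPTIONS))
    writer.writeheader()
    for entry in entries:
        row = asdict(entry)
        row["occurred_at"] = entry.occurred_at.isoformat()
        writer.writerow(row)
    return out.getvalue()

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    log = [ActivityLogEntry("evt-1", "user_123", "share", "image-42", "203.0.113.7", now)]
    print(export_for_law_enforcement(purge_expired(log, now)))
```

The design choice worth noting is that the field descriptions live next to the data itself, so an export handed to law enforcement is self-explanatory rather than dependent on an engineer being available to decode it.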


A handful of examples to get your minds going:

P.O.W.E.R. means possessing control, authority, or influence over someone or something. Could your product let someone exert P.O.W.E.R. over another person? Consider if that person is a historically underrepresented minority and/or a child.

Propagation – Does the success of your product depend on the propagation of content your company did not create? (Propagation is the action of widely spreading and promoting an idea, theory, etc.) Another way of asking this is “Can you hit your GEMS metrics without propagating user-generated content?” For those not in tech, GEMS stands for growth, engagement, monetization, and satisfaction (ex: information satisfaction, regulatory compliance, etc.). So can you grow your number of users, lengthen the time they spend in or on a product, increase the revenue you make from them, or satisfy their needs without having to propagate content your company didn’t create? If not, you’re inherently incentivized to build a product that encourages fast, frictionless propagation of content – which is at odds with measures like a cooling-off period normally associated with rational, deliberate decisions (much less giving your product team time to determine whether the content adheres to your product policies). For example, social media companies rely on a robust content ecosystem to have users scroll infinitely through a feed. Search engines need content to fulfill user queries (just look up a query like [track my girlfriend] or [stalkerware]). App stores rely on apps like Dream Zone to satisfy some men with ads that gamify rape. Maybe this kind of issue isn’t inherently at odds with your goals, but it’s a consideration.

Offline Parity – Do you treat behavior in your product differently than if it happened offline? Is there a chance the product could reflect activity online differently than it is occurring offline? Showing your genitals to someone without consent is sexual violence. Sending a dick pic to someone should be treated the same.

E.V.I.D.E.N.C.E.

Verification of identity and data authenticity: Can you provide verification of identity and data authenticity as required in court? Can you connect the actor to the activity? Directly from a prosecutor: “At trial, the main hurdle is often proving that a specific perpetrator sent a specific transmission. Offenders tend to use new devices and public Wi-Fi when distributing the photos/videos. Services exist to mask IP addresses. Some may also use throwaway devices and/or a virtual private network (VPN) to make it seem as if the distribution originated from China or Russia. Getting logs and connection data from a foreign VPN provider (if the logs even exist) is difficult and tedious. Defendants will commonly argue that they themselves were hacked. A well-organized evidence chart can be used to show that only that perpetrator would have the motive and ability to create the campaign of cyber sexual abuse your client endured … but that is usually directly at odds with the internal privacy mandate of a company.”

How do current legal protections cover the next iteration of your product or feature? Here’s a common clause used in temporary restraining orders and orders of protection: The Respondent is not to post, transmit, or maintain, or cause a third party to post, transmit, or maintain, any images, pictures, or other media, depicting the Petitioner in a naked state or participating in any sexual act OR threaten to do the same. The Respondent is to refrain from using Petitioner’s likeness or impersonating Petitioner on any social media. If your product counsel can’t fit your feature into that language, how will you communicate to lawyers and legislators that legal protections need to be updated?


You get it.

The members of our task force, and the technology industry professionals who partnered with us to create this framework, all realize that some of these questions may need to be addressed at different levels of seniority. We know this framework may be more useful for some types of technologies than others. And in some countries with more authoritarian governments, turning over evidence to law enforcement may require considerations beyond those this framework addresses. But good, iterative product design does not permit perfection to be the enemy of progress – so let’s get started. Instead of doom-scrolling through your social media feeds while watching the presidential debates this week, listen to the candidates’ plans to see if they come close to addressing the groups most vulnerable to tech-enabled abuse. And then maybe take a moment to ask yourself these questions of power and consent about the products you build.

What will be your plan to build more humane technology products in 2024?


Tanuja Jain Gupta is a former senior engineering program manager with twenty years of experience, eleven of them at Google. During that time, she also advocated for workers’ rights, leading a global walkout against sexual harassment in 2018 and successfully lobbying for Google to end its policy of forced arbitration in March 2019. Gupta was a key advocate for HR 4445, which became law in March 2022, bringing together survivors of sexual harassment from around the country to end forced arbitration at the federal level. For this work, she received the 2019 American Association for Justice Steven J. Sharp Public Service Award. While managing a large team at Google and working on some of its highest-profile engineering and regulatory initiatives, she built a diversity, equity and inclusion program that was replicated by several teams within the company. Gupta also chaired the Board of the Crime Victims Treatment Center from 2017 to 2023. She is now a rising 2L at Cardozo Law School, advocating for caste equity and tech reforms. She joined the NY Cyber Abuse Task Force to channel her tech expertise for the benefit of survivors, and hopes her former colleagues in the industry will do the same. Deep thanks to the multiple engineers and trust & safety analysts who contributed to the near year-long development of this framework.