Should your AI notetaker be in the room?

It seems like everyone is using an AI notetaker these days. These tools help users stay more present in meetings, keep better track of commitments and action items, and recall details far more reliably than most people’s memories. On the surface, they look like a simple example of AI living up to the hype of improved efficiency and performance. 

As an AI ethicist, I’ve watched more and more of the people I meet with use AI notetakers, and it has increasingly filled me with unease: these tools are invited to more and more meetings (including some the user doesn’t actually attend), and I rarely encounter someone who has explicitly asked for my consent to use the tool to take notes.  

However, as a busy executive with days full of context switching across a dizzying array of topics, I also felt a lot of FOMO at the prospect of taking a load off so I could focus on higher-value tasks. It’s clear why people see utility in these tools, especially in an age where many of us are cognitively overloaded and spread too thin. But in our rush to offload some work, we don’t always stop to consider the best way to do it. 

When a “low risk” use case isn’t 

It might be easy to assume that using AI for something as simple as taking notes isn’t ethically challenging. If the claim is that it should be ethically low stakes, you’re probably right. But the reality is quite different.  

Taking notes with technology entangles the complex topics of consent, agency, and privacy. Because taking notes with AI requires recording, transcribing, and interpreting someone’s ideas, these issues come to the fore. To use these technologies ethically, everyone in each meeting should:  

  • Know that they are being recorded 
  • Understand how their data will be used, stored, and/or transferred 
  • Have full confidence that opting out is acceptable 

The reality is that this shouldn’t be hard – but the economics of selling AI notetaking tools mean that achieving these objectives isn’t as straightforward as download, open, record. This doesn’t mean these tools can’t be used ethically, but it does mean we have to use them with intention. 

What questions to ask:

What models are being used? 

Not all AI is built the same, in terms of both technical performance and the safety practices that surround it. Most tools on the market use foundation models from frontier AI labs like Anthropic and OpenAI (which make Claude and ChatGPT, respectively), but some companies train and deploy their own custom models. These companies vary widely in the rigour of their safety practices. You can get a deeper understanding of how a given company or model approaches safety by seeking out the model card for the model behind the tool you’re considering.  

The particular risk you’re taking will depend on a combination of your use case and the safeguards put in place by the developer and deployer. For example, there’s significantly more risk in using these tools in conversations where sensitive or protected data is shared, and that risk is amplified by tools with weak or non-existent safety practices. Put simply, using this technology when you’re dealing with sensitive or confidential information is a higher ethical risk – and potentially illegal.  

Does the tool train on user data? 

AI “learns” by ingesting and identifying patterns in large amounts of data, and improves its performance over time by making this a continuous process. Companies have an economic incentive to train on your data – it’s a valuable resource they don’t have to pay for. But sharing your data with any provider exposes you and others to potential privacy violations and data leaks, and ultimately it means you lose control of your data. For example, research has shown that certain techniques can cause large language models (LLMs) to reproduce their training data, and AI creates other unique security vulnerabilities for which there aren’t easy solutions. 

For most tools, the default setting is to train on user data. Often, tools will position this approach as a form of generosity: providing your data helps improve the service for yourself and others. While users who prioritise sharing over security may choose to keep the default, those who place a higher premium on data security should find this setting and turn it off. Whatever you choose, it’s critical to disclose this choice to those you’re recording. 

How and where is the data stored and protected? 

The process of transcribing and translating can happen on a local machine or in the “cloud” (which is really just a machine somewhere else connected to the internet). The majority of tools use a third-party cloud service provider, which expands the potential ethical risk surface.  
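To make the local option concrete, here is a minimal sketch of what on-device transcription can look like, using the open-source Whisper library purely as an illustration (my choice of example, not a claim about how any particular notetaker works). The recording and the transcript never leave your machine; the only thing downloaded is the model itself.

    # A minimal sketch of fully local transcription, assuming the
    # open-source openai-whisper package and ffmpeg are installed
    # (pip install openai-whisper). No audio or text is sent to a
    # cloud service; the model weights are downloaded once and the
    # processing runs on your own hardware.
    import whisper

    # "base" is a small model that runs on most laptops; "small",
    # "medium", and "large" trade speed for accuracy.
    model = whisper.load_model("base")

    # A recording you already have consent to transcribe (hypothetical path).
    result = model.transcribe("meeting_recording.wav")

    print(result["text"])

The trade-off is that you become responsible for storing and securing the recording and transcript yourself – which, like everything else here, should be part of what you disclose.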

First, does the tool run on infrastructure associated with a company you’re avoiding? For example, many people specifically avoid spending money on Amazon due to concerns about the ethics of their business operations. If this applies to you, you might consider prioritising tools that run locally, or on a provider that better aligns with your values.  

Second, what security protocols does the tool provider have in place? Ideally, you’ll want to see that a company has standard certifications such as SOC 2, ISO 27001 and/or ISO 42001, which show an operational commitment to security, privacy, and safety. 

Whatever you choose, this information should be a part of your disclosure to meeting attendees.  

How am I achieving fully informed consent? 

The gold standard for achieving fully informed consent is making the request explicit and opt-in by default. While first-generation notetakers were often included as an “attendee” in meetings, newer tools on the market often provide no way for everyone in the meeting to know that they’re being recorded. If the tool you use isn’t clearly visible or apparent to attendees, the ethical burden of both disclosure and consent gathering falls on you.  

This issue isn’t just an ethical one – it’s often a legal one. Depending on where you and your attendees are, you might need a persistent record that you’ve gotten affirmative consent to create even a temporary recording. For me, that means I start meetings with the following:  

“I wanted to let you know that I like to use an AI notetaker during meetings. Our data won’t be used for training, and the tool I use relies on OpenAI and Amazon Web Services. This helps me stay more present, but it’s absolutely fine if you’re not comfortable with this, in which case I’ll take notes by hand.” 

Doing this might feel a bit awkward or uncomfortable at first, but it’s the first step not only in acting ethically, but also in modelling that behaviour for others.  

Where I landed 

Ultimately, I decided that using an AI notetaker in specific circumstances was worth the risk involved for the work I do, but I set some guardrails for myself. I don’t use it for sensitive conversations (especially those involving emotional experiences) or those where confidential data is shared. I start conversations with my disclosure, and offer to share a copy of the notes for both transparency and accuracy.  

But perhaps the broader lesson is that I can’t outsource ethics: the incentive structures of the companies producing these tools often aren’t aligned with the values I choose to operate by. Still, I believe that by normalising these practices, we can take advantage of the benefits of this transformative technology while managing the risks. 

 

AI was used to review research for this piece and served as a constructive initial editor.  
