It’s great to see people talking about responsibility and AI in the same sentence. But the phrase “responsible AI” is really starting to bother me. It implies that responsibility is somehow a feature of machine learning models. It shifts our attention away from activities where humans should be the ones to take responsibility: responsible model training, responsible fine-tuning, responsible human feedback practices, responsible deployment of models, etc. Instead, we shove all this human activity into the responsible AI bag and pretend it’s a bag somebody left unattended at an airport. We don’t know how it got here, who it belongs to, but hey, maybe we can convince it not to blow up!

Interestingly, we don’t talk about “responsible electricity”. We talk about responsible electricity (or energy) use in households. We educate people on how to safely handle electrical devices and how to reduce their energy usage. But when it comes to AI, we somehow assume somebody will figure out how to convince these black-box models to act responsibly so that we won’t have to think about it anymore. That’s a lot of responsibility for a statistical model to bear; those poor hidden layers must be weighted down by anxiety!

Nerdy jokes aside, the European Union is at least taking concrete steps with the EU AI Act to define the responsibility of providers of foundation models like the ones behind ChatGPT. But if we look at OpenAI’s Safety & Responsibility page, the title reads: “Developing safe & responsible AI”. Shouldn’t it be “Responsibly developing safe AI that benefits humanity”? Interestingly enough, the OpenAI Charter – which describes the principles they use in pursuit of their mission – doesn’t mention responsibility at all. This makes me wonder whether they are well aware that “responsible AI” is a useful PR gimmick.

I think most organizations developing foundation models or large-scale AI products are smart enough to realize that the term is a useful decoy. It lends itself nicely to media and non-experts anthropomorphizing AI systems and believing that responsibility is a trick our AI dogs¹ can easily be taught. The way we currently talk about responsible AI abstracts away human responsibility and power. It reduces AI ethics to a set of technical features rather than treating it as a large-scale conversation humanity needs to have. Yes, humanity, not just the Silicon Valley elites and various “grandfathers of AI”. Historically, leaving big decisions in the hands of a few privileged men hasn’t played out well for everyone.

So, I believe it’s time for all of us to speak up and examine our own responsibility. Join the conversation on responsible AI, regardless of whether you’re directly involved in the development and deployment of AI tools or not. Microsoft is shipping GPT-powered tools to every Windows 11 PC in the world, and Google is bringing generative AI to every Gmail inbox. They’ve made AI everyone’s business.

This is why, together with my co-conspirator Daniel Hartley, I’ve decided to launch the Responsible AI Pledge Challenge as part of ResponsibleTech.Work. The challenge is based on Pledge Works, our open-source tool that aims to embed responsibility in everyday product decisions.

The challenge is pretty simple: choose a pledge period, examine your context, write and share your responsible AI pledges, follow them, and review them at the pledge checkpoint at the end of the chosen period. You can learn more about each of the steps in the challenge announcement.

Diagram of the Responsible AI Pledge Challenge iterative cycle. The stages are:

1. Choose a pledge period: set a timeframe to meet your responsible AI pledges.
2. Explore your context: what is your position on AI and what are you missing?
3. Pledge, share & challenge: how can I contribute to the responsible AI debate?
4. Follow your pledges: pin your pledges as you try to honour your commitment.
5. Pledge checkpoint: review & share your pledge progress and decide on next steps.

My #ResponsibleAIPledges

Obviously, it wouldn’t be very responsible to launch a challenge without accepting it myself, so I’d also like to add my two cents to the conversation.

My pledge period and context window

This week we’re getting the first heat wave in Ljubljana, marking the proper beginning of summer. Given that I don’t handle the heat that well and that I’m going through several big life changes this year, I’ve decided to give myself three months for the challenge, with the pledge checkpoint earmarked for September 20, 2023.

Here’s a bit more info on the context I took into consideration when writing my pledges: I’ve been following the field of machine learning for a couple of years and recently started learning more about LLMs (Large Language Models) to deepen my technical understanding. I’m good at exploring emerging tech trends and helping people learn about technology. I’ve already published a blog post with the generative AI resources I recommend most often. Professionally, I’ve been focusing on tech ethics and responsible tech work for the past two years. Before that, I focused on demystifying programming through initiatives like Europe Code Week and on promoting a more diverse tech industry. Given all of this, I think my contributions to the responsible AI debate can be focused on education and helping people understand AI better.

I also want to fill some knowledge gaps I have. One particular topic I wish I knew more about is the environmental impact of LLMs, as the ongoing climate crisis is a big source of anxiety for me and, obviously, a big existential threat. I recently came across the estimate (link courtesy of the Green Software Foundation newsletter, which you should subscribe to) that tools like ChatGPT use almost half a liter of water to answer 20 prompts. Multiply that by the estimated 100 million monthly active users and give yourself an existential headache. Hence, I also decided to write a pledge that will help me learn more about this topic.
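To put that multiplication in perspective, here’s a rough back-of-the-envelope sketch. The per-prompt water figure comes from the estimate cited above; the assumption that each user sends just 20 prompts a month is my own deliberately low guess for illustration, not part of that estimate:

```python
# Back-of-the-envelope: monthly water footprint of ChatGPT prompts.
# Assumptions (for illustration only):
#   - ~0.5 liters of water per 20 prompts (the estimate cited above)
#   - 100 million monthly active users
#   - each user sends just 20 prompts per month (my own low guess)

liters_per_prompt = 0.5 / 20          # 0.025 liters per prompt
monthly_users = 100_000_000
prompts_per_user = 20                  # assumption, likely an underestimate

monthly_liters = liters_per_prompt * monthly_users * prompts_per_user
print(f"{monthly_liters:,.0f} liters per month")  # 50,000,000 liters
# That's 50 million liters a month, roughly 20 Olympic swimming pools,
# and only grows if users send more than 20 prompts each.
```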

My pledges

🌱 I pledge to explore different mediums and channels to help more people understand the biases and limitations of tools like ChatGPT and improve their AI literacy.

I’m not interested in creating another explainer of how transformers work – there are plenty of those around – or helping people “unleash productivity” through prompt engineering (eye roll). Instead, I think there’s a need for more accessible content that helps people develop critical thinking skills.

Currently, my primary medium is writing, and my primary channel is my blog. As part of this pledge, I’d like to experiment with other mediums (especially visual ones) and channels (such as TikTok) to reach people who don’t typically read blog posts. I already have some ideas about how I might use LEGO bricks in this process, but I’m open to other tools as well.

🌱 I pledge to learn more about the environmental costs of using LLMs and share what I’ve learned through different mediums and channels.

This means doing more research on the topic, joining more discussions in the ClimateAction.Tech community, and bringing sustainability to the forefront of weekly ResponsibleTech.Work discussions.

🌱 I pledge to be transparent about how and when I use AI tools.

And finally, I decided to add a simple accountability pledge to remind myself that it’s a good practice to disclose when AI tools are used. Doing so can also support the AI literacy pledge. I don’t generally use AI tools for writing, but I do often use ChatGPT for brainstorming and even getting feedback on text.

In this post, I used LanguageTool, an AI-based style and grammar checker, to improve my writing and catch typos, and ChatGPT to get feedback on the blog post after I had finished editing it, though I didn’t end up using any of its suggestions.

Your turn

And now I am passing the responsibility baton to you, dear reader. I hope you will join the challenge, write some pledges of your own or remix those that resonate with you, and, most importantly, have discussions about responsibility in AI development and deployment that recognize the power we all have, even if “only” as consumers and voters. LLMs are trained on everything we write and share publicly, and that by itself carries power and responsibility. Don’t let the grandfathers of AI and billionaires convince you they’ve got all the answers!

  1. Well, we all hope LLMs are dogs, but we’re afraid they’re really cats that aren’t fully domesticated.