AI-Savvy Humanists

The evolution of designers in the age of algorithms


Mara Pometti

17/11/2023 · 8 min read

AI is changing the way we interact with each other, consume content, learn new information, and experience the world. As McKinsey’s latest State of AI report shows, organizations have rapidly deployed generative AI tools over the past year. How this technology will weave its outcomes, decisions, and tasks into our daily lives safely and meaningfully remains an open question. Are we ready to officially welcome AI into our lives? It’s as if someone dropped a superpowered tool into our hands without any instructions, leaving us all wondering how to make the best of it. Design will help us solve this enigma.

Design, which by definition is a human-centered approach to problem solving, will play a crucial role in shaping our relationship with AI. The focus on people’s problems will inform the means, the experiences, and the boundaries we create to make sure that models and algorithms align with our human values, needs, and civil norms.

I should have started this article with a disclaimer: I’m not a designer by training, but an AI expert who, over the years, has deliberately embraced design as a tool to work better with AI. Approaching design as a method, rather than as the intent of what I do, made me realize that plenty of opportunities lie ahead for designers to become key actors in the creation of AI systems. What designers bring to the table is hundreds of years of culture and thinking on how to integrate technologies into people’s lives to meet future or existing needs. Technology, when designed for good, aspires to make human life better. Yet to make technologies accessible to human beings, we need design. This holds especially true for AI. To expand the use, intention, and meaning of AI, designers must master AI.

To envision new possibilities for solving people’s problems with AI, designers should acquire new skills in domains they are not necessarily familiar with, such as AI engineering, data science, software engineering, responsible AI, and data visualization. The concept of design in the age of AI should evolve as designers progressively become the enablers who make AI meaningful, responsible, and transparent. AI is giving us the opportunity to heal the fracture between the tech disciplines and the humanities. This convergence is embodied in designers, whom I see evolving into “AI-savvy humanists”.

Reflecting on my journey (my “imperfect and still in progress” love story with AI), I have gathered some of the most important lessons I’ve learned from applying a human-centered, that is, design-led, approach to AI, in the hope of creating a path forward for designers who want to become AI-savvy humanists. So, what does this path entail?

Data first, AI second

It’s the data that fuels AI. We interact and communicate with AI through data. To work with AI, we must first be data literate. My work in AI started with data and because of data. In fact, my disciplinary home is data journalism. I’ve been using data as a language to conduct research and experiments, make sense of AI, test hypotheses, and write visual stories, with the goal of making data and code accessible to people.

I believe there is nothing more human-centered than the effort to capture the human stories hidden in data and algorithms.

A few years ago, my passion for data journalism led me from the media to the tech industry, when I joined a team of data scientists as the only data journalist (a very unusual career path!). Yet it was the courage to inhabit a world I didn’t belong to that let me enter the AI field through my data storytelling and data science skills. AI has been shaping my career ever since.

My new mission for data journalism in AI became clear: translating algorithms’ outputs into actionable knowledge through visual data stories that make AI’s complexity understandable and humanize AI.

I consider data storytelling an integral part of working with AI. Data stories unfold the AI’s complexity and chart an understandable and actionable path to its strategy.

Look under the surface

There is an immense opportunity to apply design methods that extend far beyond traditional design practices. However, design in the context of AI is usually confined merely to product, UI, or UX design. Yet this is just the tip of the iceberg: the impact of design on AI is broader than that. When we consider people in the process of designing AI systems, we should go beyond the narrow idea of users and explore humans holistically, not only through the dimension of action (as users), but also through the dimensions of thinking and feeling.

Designers can apply their human-centered mindset to think systemically about humans in all their aspects, and find solutions to the many new challenges that AI poses and that need to be addressed through a human lens: fairness, accountability, regulation, risk mitigation, the alignment problem, transparency, operationalization, governance, and AI monitoring. AI needs design for risks, for uncertainty, for accountability, for data. It needs a whole new set of design considerations.

When done well, design connects people to technologies and rebalances power dynamics. This is where the opportunity for designers in AI lies.

Start from AI Intents

It’s extremely tempting to jump straight to models with the assumption that we already know what the AI’s intent is and that we can take any model off the shelf to build an application, especially considering how easy it is to access and use LLMs.

There is a question we should always ask before getting into “how” to use AI: what is its intent? Understanding the intent behind a machine guides the creation of meaningful outcomes.

Today, with services like GPT, there is a common misconception that you just need to access a model through an API and magic happens. The truth is that nothing about AI is simple. There are many human-led questions that should inform AI development.

For example, human-centered questions informed the design and technical requirements of an AI solution my team and I built to speed up the resolution of customer issues at a call center. What data do we need to return reliable information to the agent? How do we make sure the model produces trustworthy content aligned with users’ expectations? How do we craft prompts that interpret the questions and needs of customer agents?

These questions led us to define the AI’s intent: helping customer agents instantly find information they need by prompting a question.

A human-centered approach relies on capturing human needs as clear intents that can then be turned into technical requirements. Behind every technical choice there is a human-led question.
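As a hypothetical illustration (the article does not share the team’s actual implementation), an intent like “help customer agents instantly find the information they need by prompting a question” can be made concrete as a prompt template whose structure directly encodes the human-led requirements: ground answers in reliable reference data, and admit when the answer isn’t there. All names here are assumptions for illustration.

```python
# Hypothetical sketch: turning an AI intent into a prompt template.
# The template encodes two human-centered requirements from the text:
# answers must come from reliable reference data, and the model must
# say so when that data doesn't contain the answer.

INTENT = "Help customer agents instantly find the information they need."

PROMPT_TEMPLATE = """You are an assistant for call-center agents.
Answer the agent's question using ONLY the reference documents below.
If the documents do not contain the answer, say you don't know.

Reference documents:
{documents}

Agent question: {question}
Answer:"""

def build_prompt(question: str, documents: list[str]) -> str:
    """Fill the template with retrieved documents and the agent's question."""
    joined = "\n---\n".join(documents)
    return PROMPT_TEMPLATE.format(documents=joined, question=question)

prompt = build_prompt(
    "How do I reset a customer's password?",
    ["Password resets: verify identity, then use the admin console."],
)
```

The point is not the template itself but the traceability: each line of it answers one of the human-led questions above.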

Test your hypotheses (always!)

Building AI systems just for the sake of AI, without centering their design and testing on humans, never works.

The best way to make AI solutions accountable is to critically evaluate data and models in continuous iterations, with rigorous scientific and human-centered methods. This way our findings and observations evolve and, as they do, we can bring the models closer to the truth.

To orient my and my team’s thinking around testing, I created a series of human-centered frameworks by critically reviewing existing methods and academic papers. These frameworks help select the appropriate benchmarks to test a model’s hypotheses, based on the data sample used to evaluate whether the model’s outcomes align with users’ wants and needs.

The process of testing AI’s outcomes, especially with LLMs, is crucial to aligning with user intents. Identify the model’s failures, and plan ways to resolve them and improve the model.
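The loop described here (test outcomes against user intents, identify failures, plan fixes) can be sketched in a few lines. This is a minimal illustration under stated assumptions: `call_model` is a stand-in for whatever model API a team actually uses, and keyword matching stands in for the richer human-centered benchmarks the text describes.

```python
# Minimal sketch of a human-centered evaluation loop for an LLM-backed
# system: run intent-aligned test cases, flag outputs that miss the
# expected content, and collect failures for follow-up analysis.

def call_model(question: str) -> str:
    """Stand-in for a real model call (e.g. an API request to an LLM)."""
    canned = {
        "What is the refund window?": "Refunds are accepted within 30 days.",
    }
    return canned.get(question, "I don't know.")

def evaluate(test_cases: list[dict]) -> dict:
    """Run each test case and record any answer missing the expected keyword."""
    failures = []
    for case in test_cases:
        answer = call_model(case["question"])
        if case["expected_keyword"].lower() not in answer.lower():
            failures.append({"question": case["question"], "got": answer})
    return {"total": len(test_cases), "failures": failures}

report = evaluate([
    {"question": "What is the refund window?", "expected_keyword": "30 days"},
    {"question": "How do I cancel my plan?", "expected_keyword": "account settings"},
])
```

Each entry in `report["failures"]` is a concrete, reviewable gap between what users need and what the model produced, which is exactly the artifact that the next iteration plans against.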

Build a compass for AI

Testing is crucial, but the problem is that this technology is still very mysterious to us: we don’t know exactly how LLMs work. There is still a huge lack of transparency around current LLMs. Ultimately, a human-centered approach to AI should come down to ensuring people’s understanding of AI, unfolding its complexity with transparency and trust to drive its adoption.

As generative AI enters our common awareness, together with its well-known risks such as bias, toxicity, hallucinations, and misinformation, we can expect people to demand more and more transparency and guardrails.

Even if LLMs get better at sticking to common knowledge, AI is not accountable on its own. There is a massive need for designers to bring accountability into the AI development equation.

We can’t completely eliminate AI’s risks, at least not yet! However, we can devise ways to control and mitigate them. We cannot plan a straightforward journey, but we can build our own compass to orient ourselves in the AI world and keep models on track.

Explainability, transparency, and risk mitigation are not only regulatory issues: they also involve technical challenges. A human-centered approach should inform the technical decisions made during development, with the goal of eventually helping people understand how AI systems work, so they can trust and adopt them.
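One very small example of a “compass” as a technical decision: a guardrail that checks model output against simple, auditable rules before it reaches a user. Real systems use far more sophisticated checks; the rule list and function names below are assumptions for illustration only, not a recommended risk policy.

```python
# Illustrative sketch of a lightweight output guardrail: before an LLM
# response is shown to a user, screen it against explicit, human-readable
# rules. The rules themselves are a design artifact that can be reviewed,
# explained, and updated, which is what makes this a transparency tool.

BLOCKED_PATTERNS = [
    "social security number",   # hypothetical rule: sensitive data
    "credit card number",       # hypothetical rule: payment data
]

def guardrail_check(response: str) -> tuple[bool, str]:
    """Return (allowed, reason); block responses matching a risky pattern."""
    lowered = response.lower()
    for pattern in BLOCKED_PATTERNS:
        if pattern in lowered:
            return False, f"blocked: mentions '{pattern}'"
    return True, "ok"

allowed, reason = guardrail_check("Your credit card number ends in 1234.")
```

Because the reason string names the rule that fired, the system can tell the user (and the auditor) why a response was withheld, which is the kind of explainable behavior the paragraph above calls for.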

Be an AI-savvy humanist

We need a new generation of experts who can humanize AI by bringing it into our lives responsibly: the AI-savvy humanists.

The evolving role of designers will produce experts who combine a human-centered mindset with data and AI literacy. They will understand models’ outcomes, see new opportunities in AI, and ask critical questions based on an accurate interpretation of the data returned by AI systems. By doing so, these experts will bring AI closer to our own humanity and values.

The AI-savvy humanists will illuminate AI, pushing back against false narratives generated by biased algorithms and using data to back up hypotheses and challenge assumptions. They will be the ones who capture the uniqueness of human beings and encode it into the large, general thinking of AI models, preserving our individuality and free will.

Mara Pometti gave a talk at the digital design conference Design Matters 23, which took place in Copenhagen & Online, on Sep 27-28, 2023. Watch her talk below.


If you want to connect with Mara, find her on LinkedIn or visit her website.
