Reflections from SXSW on AI, rights, and what we are becoming, by Luisa Ortiz Pérez.


At SXSW, we tried to name something that is still difficult to fully grasp.
We called it the digital snake.
The name comes from an Indigenous prophecy—of a black snake that would travel under the earth and poison the water. Years ago, it was used to describe oil pipelines. Today, it feels impossible not to see the parallel: systems that move invisibly, extract relentlessly, and leave behind contamination that is harder to trace, but just as real.
Only now, what is being extracted is not just land. It is data. It is attention. It is our emotional lives.
And we are still trying to understand the cost.
I was there not just as a speaker, but as someone who spends most of her time listening to people who are already living the consequences of this shift.
At Vita Activa, we offer emotional and psychological support to people experiencing digital harm. Journalists, activists, responders, people who are harassed, surveilled, exhausted. People who are holding too much.
And one of the things we shared at SXSW is something that continues to sit heavily with me:
we are missing people.
Not because they don’t need support—but because we are not there when they reach out. Nights. Weekends. Moments of collapse that don’t follow business hours. Almost 60% of the cases we could not attend to arrived in those gaps.
This is what the “AI conversation” often forgets: harm does not wait for systems to be ready.
So we built something very simple. Not a chatbot. Not an intelligent system. An answering machine.
Her name is Nina.
She doesn’t pretend to be human. She doesn’t learn. She doesn’t simulate care. She holds a line—just enough to remind someone they are not alone, just enough to connect them to a human when one becomes available.
It is a modest intervention. But it comes from a very clear position:
technology should not replace care. It should carry it carefully, when we cannot.
But the conversation at SXSW was not only about what we are building. It was about what we are becoming.
At one point, someone in the room—someone from the tech industry—said something that stayed with me. They talked about how fast everything is moving. How rarely anyone stops to ask: should we be doing this?
There was no cynicism in their voice. Just a kind of fatigue.
Because the truth is, most people building these systems do not believe they are causing harm. They are working. They are solving problems. They are trying to keep up. Sometimes they are just trying to survive.
And yet, harm is happening anyway.
Like the activist in an African country where being LGBTQ+ is criminalized—targeted with AI-generated images that put their life at risk.
Or the journalist in the Philippines, flooded with tens of thousands of coordinated attacks a day, testing techniques that would later be used elsewhere.
Or the quiet, almost invisible cases: a partner tracked through a device that was marketed for safety. A cycle monitored, a purchase not made, a question triggered in a system that was never meant to ask it.
There is a pattern here.
The same technologies that promise protection are often the ones that enable control. The same infrastructures that create connection also amplify isolation.
And somewhere in the middle of all this, people are trying to make sense of what they are giving up in exchange for convenience, efficiency, or simply participation in modern life.
One of the most uncomfortable realizations in the room was this:
we are being asked to carry too much of the responsibility.
We are told to be mindful. To protect our data. To verify what we read. To choose better platforms.
But how do you make meaningful choices in a system where power is so concentrated? Where a handful of companies shape how information flows, how relationships are mediated, how reality is constructed?
The conversation has shifted—from collective responsibility to individual behavior.
And that shift is not accidental.
At the same time, there were glimpses of something else.
A story from Venezuela about AI-generated news anchors—fictional personas delivering real information, immune to imprisonment or harassment. A fragile but powerful form of protection.
Or the insistence, again and again, that encryption must be preserved—not as a technical preference, but as a condition for dignity and safety.
Or the idea that data governance should not belong only to corporations or states, but to civil society—to people who can hold both accountability and care.
These are not solutions. But they are openings.
And still, I find myself not in hope, but in something else.
I called it, perhaps half-jokingly but not really, dignified rage.
Because I am watching what this is doing to people.
To the teenagers who are growing up mediated by systems that do not understand them.
To LGBTQ+ individuals who cannot find safe spaces, even online.
To survivors of violence whose lives can be mapped, tracked, predicted.
To journalists who no longer feel safe telling the truth.
We were told that technology would connect us.
But we are more alone than ever.
We were told that it would make life easier.
But the emotional load has not disappeared—it has just shifted, often onto those least able to carry it.
So the question I am left with is not whether AI is good or bad.
It is much simpler, and much harder:
What kind of world are we willing to build around it?
One where speed continues to outrun reflection?
Where extraction is normalized as progress?
Where care is an afterthought?
Or one where we are willing to slow down, to regulate, to organize, to insist that human dignity is not negotiable?
We are not at the end of this story.
If anything, we are in a moment of tension—a pendulum that has swung hard in one direction.
Maybe it will swing back. Maybe it won’t.
But what feels clear to me is this:
Care cannot be automated away.
Rights cannot be retrofitted after harm.
And the future of AI will not be decided only by those who build it—
but by those who are willing to question it, resist it, and reshape it.

Vita-Activa.org is a helpline for journalists, activists, women, LGBTQIA+ people, and human rights defenders who are facing online gender‑based violence, stress, anxiety, chronic exhaustion, trauma, and distress. Our services in Spanish, Portuguese, English, and Arabic (by appointment) are free, confidential, and anonymous.
Vita Activa provides psychological and digital first aid, holistic crisis management, and support for strategic decision‑making.
Find us at [email protected] (Spanish) and [email protected] (English) | @VitaActivaOrg (FB/TW/IG/TikTok/BlueSky) | +569‑3291‑9018 (WhatsApp/Signal)