AI Agents in Dating Apps: Sociological Risks of Optimising Human Connection
From Istanbul to Nairobi to New York, AI agents are entering dating apps. What does that say about the future of relationships and human agency?
Hello everyone,
While going through my daily reads today, I stopped at a Wired piece about AI agents entering dating apps. Picture this:
It is a Sunday morning in Istanbul, Nairobi, Berlin or São Paulo. You open your dating app. The interface looks familiar: profile photos, short bios, emojis, offers of coffee and conversation.
Then a small notification appears:
“Your AI assistant has pre-screened 237 profiles. Here are 3 highly compatible matches.”
You never swiped on these people. Somewhere on a server you will never see, your AI agent has already done the early work of dating for you.
This is not a distant scenario.
Hinge uses an AI-powered tool to shape how users answer prompts.
Facebook Dating is testing a “dating assistant” to brainstorm ideas.
Volar even let people train AI versions of themselves that flirted with other people’s AIs as a form of pre-date screening.
Fate launched in London as what it calls the world’s first agentic AI-powered connection engine.
Known raised funding on the promise that an AI onboarding call could produce introductions where a large share turn into in-person dates.
Some people call this “pre‑dating.” Your agent talks to other agents while you sleep, then wakes you with a shortlist of candidates who supposedly match your values, habits and energy. At that point, dating starts to feel less like romance and more like a recruiting process.
So here is the core question for this week’s TechLetter:
What are the sociological risks of trying to optimise human connection, and what does agentic AI in dating apps tell us about where we are heading?
I will try to answer this through four lenses: ethics and safety, regulation, inequality and the loneliness crisis.
Why do we even want AI agents in dating?
The 2025 Singles in America study reports that about 26% of US singles already use some form of AI to help with dating (for Gen Z it is 50%), and that AI use in dating has jumped roughly 333% in just one year.
A separate survey by Norton and others finds that around six in ten dating app users believe they have encountered at least one AI‑written conversation.
In Europe, the picture is different but equally telling.
Around 28% of European adults aged 25 to 34 use dating apps, rising to about 35% in the UK.
Last year, Match Group rolled out its AI “Matchmaker” feature across Europe, using on‑device inference to comply with GDPR limits and avoid sending extra personal data to the cloud.
At the same time, discomfort is visible in the data. In the Match/Kinsey survey, 44% of respondents say using AI to alter photos is a dealbreaker, and 36% say the same about using AI to generate entire conversations. An analysis of roughly 2,850 user reviews from Trustpilot, app stores and Reddit finds that 89% of complaints mention “algorithm manipulation,” “shadowbanning,” or “pay‑to‑win visibility.”
Underneath the numbers is a simple reality: people feel overloaded, lonely and tired of endless swiping, so they ask AI to take on some of the cognitive and emotional work of dating.
So we are sending two messages at once:
“This is too much. Please help.”
“But do not replace me. I still want this to feel human.”
To me, that tension is not just a product design challenge. It is a response to a very specific social moment: a world where online dating is normalised, many people feel overloaded and lonely, and trust in what is “real” online is fragile. When you put all of this together, a set of sociological risks comes into focus. Optimising away the hard parts of interaction can also weaken the skills we need to relate to each other.
Let’s walk through those risks.
1. Emotional “muscles” and low‑risk intimacy
Human relationships are naturally full of friction. Misunderstandings, delayed responses, bad jokes, awkward silences, fear of rejection, small conflicts and the effort to repair them.
There is a historical parallel here. When the telephone became widespread in the early twentieth century, critics worried that it would destroy the art of letter writing and make human interaction shallow. They were partly right. We did lose something. But we gained new forms of intimacy too: late‑night calls, long‑distance relationships kept alive by voice. Crucially, the telephone still required you to speak, to stumble, to find your own words.
Agentic AI steps in at exactly that point of friction. It promises to smooth out the parts of dating that feel most draining: the endless filtering, the awkward first messages, the long stretches of small talk that go nowhere. Instead of you deciding who is “worth” the effort, the system can pre‑select likely matches, draft the opening lines, keep the chat flowing when you run out of things to say, even send a gentle “no” on your behalf. For someone who is already tired or overstimulated, that offer can feel less like a gimmick and more like a relief.
Early studies on AI companions suggest that chatting with an AI can ease feelings of loneliness in the short term, sometimes in ways that feel surprisingly close to talking to another person. At the same time, evidence points to a pattern: light or occasional use may help, but heavy daily reliance is linked to more social withdrawal and weaker offline ties, especially among younger users who increasingly see AI as a viable stand‑in for a romantic partner.
Sociologically, my reading is this: the more emotional labour we hand over to AI, the less often we exercise our own “emotional muscles.” If agents keep absorbing awkwardness, rejection and small hurts on our behalf, we get less practice at tolerating them. Early interactions may feel smoother, but when things get tense or painful, we are more brittle. And that brittleness does not stay in dating. It shows up in how we handle conflict at work, how we argue about politics, and how we hold friendships together under pressure.
2. Turning relationships into optimisation problems
Agentic AI treats dating as an optimisation task: find the “best” possible match, in the shortest possible time, with the least possible friction. That logic fits neatly into the wider platform economy, where everything from transport to news is already routed through ranking and recommendation.
History offers a useful contrast. For centuries, many cultures relied on arranged marriages. A dense family network assessed compatibility based on values, economic standing and social fit. The process was slow and often unfair, but it was also communal and deeply human. Grandparents, neighbours and cousins brought their own biases, but also their intuition, their “I have a feeling about this one” moments. It was, in its own way, a messy analogue recommendation system.
AI agents are building something structurally similar, only with the human parts stripped out. There is no grandmother watching body language at a dinner table, no friend noticing how someone treats a waiter. The system sees patterns in data, not people in context.
Online dating had already turned partner search into a marketplace, with filters for age, distance, education, religion and interests. Agentic tools push this further. Compatibility is inferred not just from profile text but also from how you write, how quickly you reply, and how you behave on the app. Models predict which pairs are likely to “work” and quietly rank them higher. As those systems get better at anticipating what we will say yes to, the space for genuine surprise shrinks.
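To make the mechanics concrete, here is a deliberately toy sketch of what "predict and quietly rank" can look like. Every field name, weight and signal below is hypothetical, invented for illustration; no real dating platform's model or API is being described, and production systems would use learned models rather than hand-picked weights.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    # Hypothetical behavioural signals a matching model might use;
    # none of these field names come from any real app.
    interests: set[str]
    avg_reply_minutes: float
    message_length_words: float

def compatibility(a: Profile, b: Profile) -> float:
    """Toy pairwise score: overlap in interests plus similarity in
    messaging rhythm. The weights are arbitrary illustrations."""
    shared = len(a.interests & b.interests) / max(len(a.interests | b.interests), 1)
    tempo_gap = abs(a.avg_reply_minutes - b.avg_reply_minutes) / 60.0
    verbosity_gap = abs(a.message_length_words - b.message_length_words) / 100.0
    return 0.6 * shared - 0.25 * tempo_gap - 0.15 * verbosity_gap

def shortlist(user: Profile, pool: list[Profile], k: int = 3) -> list[Profile]:
    # Rank every candidate and return only the top k. This single
    # line is the step that decides which profiles quietly vanish
    # before the user ever sees them.
    return sorted(pool, key=lambda p: compatibility(user, p), reverse=True)[:k]
```

The point of the sketch is the last line: the user experiences only the returned `k` profiles, while everyone below the cut-off is excluded by a function they cannot inspect.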
The sociological risk is that relationships stop being open‑ended experiences and start to look like projects to manage and optimise. Stories like “we grew into love over time” or “they were not my type on paper, but something happened” become harder to sustain in an environment that constantly nudges you back toward predicted fits.
The tools do not outlaw serendipity, but they tilt the floor. They invite us to think about love through the lens of filters and performance rather than chance, ambiguity and co‑creation.
3. Authentic self and the risk of auto-catfishing
When two AI agents talk to each other and arrange a date, who exactly is meeting whom?
A Washington Post story captured this dynamic neatly. A man matched with someone on a dating app who sent long, multi‑paragraph messages, acknowledged each of his points and wove in details he had mentioned before. In person, his date had none of the conversational energy she had shown over text. Scientific American calls this phenomenon “chatfishing”: a new form of deception where people use AI to conduct conversations on their behalf.
As these tools get better, it also becomes tempting to present a politically correct, optimised version of ourselves at all times: smarter, kinder, more patient, more aligned with our stated values. The agent then looks for a similarly optimised other. Yet if we are honest, many of the relationships that matter most to us did not start from our “best‑behaved” selves. They started in the throwaway lines, the slightly clumsy jokes, the small contradictions that slipped past our self‑editing.
The agent negotiates based on that aspirational self. The other agent does the same.
So two idealised versions of people may agree to meet. The humans then have to live up to the promises their agents already made.
This creates a new form of misalignment. We could call it “auto‑catfishing.” Not straightforward lying, but gradually believing our own polished self‑description. The pressure to perform the “agent version” of yourself in real life can fuel anxiety and a constant feeling of not being enough. This widens the gap between digital self and lived self.
Also, we are accepting that even in our romantic lives, a system in the background can tell us what we want and who fits. In practice, it means that we are letting AI not just curate our feeds, but quietly arbitrate our desires. And once we accept that in something as intimate as love, it becomes harder to argue that AI should stay out of any other part of our lives.
4. Delegating responsibility and the erosion of empathy
Dating culture already struggles with ghosting, choice overload and unequal emotional labour. Agentic AI looks like a partial fix. An agent can decline on your behalf, end a dead‑end conversation, and help detect or block abusive behaviour early. That is not cosmetic; it can genuinely protect people, especially women and other vulnerable groups, from some of the worst online experiences.
But there is a line. Levinas wrote that ethics begins with the face of the other, with the moment you recognise someone as a person who can be hurt. AI layers do not erase that face, but they make it easier to look away. The more often we outsource those moments, the less often we stand in that uncomfortable ethical space ourselves.
And this does not stay in dating. Once we get used to avoiding relational discomfort by delegation, it becomes easier to avoid the emotional work of apology, repair and disagreement in our friendships, workplaces and political life too.
5. Class, code and a new kind of social stratification
Agentic AI can also create a new layer of inequality.
Even today, people with more money and time tend to get better results in online dating. They pay for premium features, invest in professional photos, or hire profile coaches. In an agentic future, that gap can widen. Affluent users will be able to pay for more capable AI agents, trained on richer data, with more persuasive language and more advanced matching logic. Platforms can bundle “AI matchmaker” features into higher subscription tiers and quietly give those users more visibility. People with higher digital literacy will be better at tuning their agents and reading the signals the system responds to.
Market analysis of the European dating sector already shows that paying users account for a large share of revenue, and premium tiers are forecast to grow, driven by AI matching, video calls and priority visibility. At the same time, independent review sites show a sharp gap between marketing and lived experience. Apps that sit at 4.0 or higher in app stores are rated as low as 1.2 to 1.5 out of 5 elsewhere, with complaints dominated by algorithm manipulation, shadowbanning and pay‑to‑win visibility.
We have seen earlier versions of this. When personal ads moved from newspapers to the web in the 1990s, early adopters with internet access and digital skills had an advantage. Over time, that gap closed as access spread. AI‑powered dating can reopen it in a more durable way, because the edge is no longer just “are you online?” but “how strong is the agent that represents you?” It is not only about connection to the network, but about the quality of your digital proxy.
In that world, the question quietly shifts from “who are you?” to “what kind of agent are you running?” For many emerging markets, where income and digital literacy gaps are already deep, building a romantic ecosystem on top of “agent quality” risks hard‑coding inequality into a very intimate part of life.
6. The regulatory blind spot: social scoring, profiling and the transparency gap
Now we move from sociology to governance.
At a technical level, AI agents in dating apps score and classify people. They evaluate users based on behavioural data, inferred traits, communication patterns and past interactions, then use those scores to rank, filter and decide who gets to see whom. In most other contexts we would call this profiling. In some contexts we would be comfortable calling it social scoring.
The EU AI Act, which started to apply its prohibitions on “unacceptable” AI practices in early 2025, explicitly bans certain forms of social scoring. Article 5(1)(c) targets AI systems that classify people based on their behaviour, socio‑economic status or personal characteristics and then use that classification to impose unjustified or disproportionate negative treatment in different contexts.
Guidance from the European Commission and analysis by groups like the Future of Privacy Forum make it clear that this includes using aggregated behavioural data to restrict access to services or benefits.
The AI Act does not ban all scoring, and dating apps are not banks or welfare agencies. They do not decide who gets a mortgage or who receives social assistance. But the underlying mechanism is structurally similar. An AI system evaluates you based on your data, assigns you an implicit compatibility or “quality” score, and that score shapes your access to opportunities for connection. In a world where loneliness has measurable health impacts, that is not entirely trivial.
The hard question is whether an AI agent that systematically excludes certain users from your feed based on inferred traits, and does so in opaque ways, begins to cross that line. We do not have case law on this yet.
GDPR raises a parallel concern. Article 22 gives individuals the right not to be subject to decisions based solely on automated processing, including profiling, that produce legal effects or similarly significant effects for them.
If your agent filters out 234 of 237 profiles before you ever see them, that is an automated decision with real consequences for the people who disappear from your horizon.
In the context of a loneliness epidemic, it is at least arguable that systematic exclusion from social opportunities can “significantly affect” people’s lives.
Most dating apps do not clearly disclose the extent to which AI modifies interactions, ranks profiles and crafts messages. They rarely offer an easy way to switch off AI‑generated features without also losing access to core functions.
There is also the question of manipulative design.
The AI Act prohibits systems that use subliminal techniques or exploit vulnerabilities to materially distort behaviour in ways people would not ordinarily accept.
If a dating app quietly tweaks visibility or match frequency to nudge users toward paid tiers or higher engagement, without explaining how or why, that starts to look like a manipulative practice under both the AI Act and the EU’s Unfair Commercial Practices Directive.
None of this means AI in dating should be banned. It does mean that we are building intimate AI systems in a regulatory grey zone. The people most affected by them, the users, often have no meaningful insight into how these systems work or what data drives their decisions.
The loneliness and disconnection paradox
In 2025, the WHO Commission on Social Connection reported that about one in six people worldwide experience loneliness, and that loneliness and social isolation are linked to roughly 871,000 deaths every year – around 100 every hour.
In Europe, an EU‑wide survey found that 13% feel lonely most or all of the time, and 35% at least sometimes, with two in three 18‑ to 24‑year‑olds describing themselves as lonely. OECD data adds that people meet in person less than they used to, and that lack of social connection often overlaps with economic disadvantage.
AI companions and agentic tools step into this gap as “low‑risk intimacy”: presence without rejection, engagement without serious conflict, connection without full vulnerability. Agentic dating applies that logic to human relationships, absorbing much of the risk and friction before we arrive. The danger is that, in trying to escape loneliness, we also outsource the very relational processes that make us feel alive and seen – and that those who most need human connection are the ones most exposed to its AI substitute.
So, how far do we want to delegate our right to choose and be chosen?
Is this a reasonable price to pay to reduce burnout, improve safety and make dating less chaotic? Or is it an early sign that our tolerance for other humans, in all their messiness, is quietly shrinking? I suspect it may be both.
For me, the key question is not whether AI agents should exist in dating at all. It is which human capacities we refuse to delegate, even when delegation is technically possible.
Filtering spam emails, managing calendars, summarising meetings: these are easy to outsource. But what about making the first move? Apologising after a hurt? Ending a relationship clearly and kindly? Sitting with awkward silence on a first date?
Personally, I am comfortable with AI helping me stay safe, summarise information, and occasionally highlight options I might have missed. I am much less comfortable with AI silently deciding who I will never even see, or speaking for me in the moments that shape who I am.
I would love to hear how you see this, especially given that TechLetter now has readers in more than 95 countries, with very different dating cultures and norms.
Would you be comfortable letting an AI agent choose “the right person” for you, based on your data and preferences? Does that feel like a modern form of matchmaking, or like replacing serendipity and fate with an algorithm?
If you feel like replying to this email, you can keep it as simple as one line:
“In my dating life, the one thing I would never outsource to AI is: …”
Your answers will probably differ from Lagos to London, from Mumbai to Madrid. That is exactly why this conversation needs to happen now, before “pre‑dating” becomes just another default we slip into without really noticing.
💬 Let’s Connect:
🔗 LinkedIn: [linkedin.com/in/nesibe-kiris]
🐦 Twitter/X: [@nesibekiris]
📸 Instagram: [@nesibekiris]
🔔 New here? Subscribe for weekly updates on AI governance, ethics, and policy! No hype, just what matters.



