Discussion about this post

Nicholas Bronson

You've made a convert out of me. I still think you need to add a third pillar, though: Accessibility. It's not enough for control to be present and overridable if control of the technology remains locked behind commercial walls, primarily in the hands of a very few of the least trustworthy people in our society. You are 100% on the money when you say that the people most able to implement these controls are those with the least motivation to do so.

Whilst there are some valid concerns around companies like DeepSeek, the majority of the vitriol towards them appears to be xenophobic and anti-competitive in nature (encouraged, of course, by their Western competitors). They should still be applauded for creating a top-tier model and releasing the weights for anyone to use as they see fit.

Sadly, even if we get regulation it will do little to solve the problem without the will to enforce it, which appears to be sorely lacking. Even with existing and enforced regulations such as the GDPR, we see little actual effect. When companies like Meta are big enough to absorb multi-billion-dollar fines and still find it more economical not to change their behaviour, we need to give consideration to more than just creating regulation; we need to work out how to ensure it is followed.

Katalina Hernández

[Noteworthy insights from inputs brought by @Carey Lening!]

Not all AI inferences require user intervention, just as not all website cookies require explicit consent.

In the same way that necessary cookies are essential for basic site functionality while third-party cookies track user behavior for advertising, AI-driven inferences can be categorized based on their impact on autonomy.

Some inferences (like basic content recommendations on Spotify or Netflix) are functionally equivalent to necessary cookies: they enhance user experience but don’t significantly alter decision-making.

Others, like behavioral profiling for hiring, credit scoring, or political content curation, function more like third-party tracking cookies: they shape outcomes and influence choices without the user’s direct knowledge.

Just as privacy regulations have required transparency around third-party tracking, governance frameworks should ensure that AI systems surface high-impact inferences that could materially affect opportunities, decision-making, or cognitive autonomy, while avoiding overwhelming users with minor, low-stakes personalization updates.

The goal is not to provide users with an exhaustive list of every inference to "micro-manage". It's to ensure they retain control over the ones that shape their life, their reality.
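
To make the cookie analogy concrete, here is a minimal sketch in Python of how inferences could be tiered by impact so that only the high-stakes ones get surfaced. All names here (ImpactTier, Inference, requires_surfacing) are hypothetical illustrations, not any platform's actual data model.

```python
from dataclasses import dataclass
from enum import Enum


class ImpactTier(Enum):
    """Rough analogue of cookie categories, applied to AI inferences."""
    FUNCTIONAL = "functional"    # e.g. basic content recommendations
    HIGH_IMPACT = "high_impact"  # e.g. hiring, credit scoring, political curation


@dataclass
class Inference:
    subject: str      # the user the inference is about
    claim: str        # what the system believes, e.g. "prefers long-form video"
    domain: str       # "recommendation", "hiring", "credit", ...
    tier: ImpactTier


def requires_surfacing(inference: Inference) -> bool:
    """Only high-impact inferences reach the user-facing dashboard;
    low-stakes personalization stays silent to avoid notification fatigue."""
    return inference.tier is ImpactTier.HIGH_IMPACT


# A Spotify-style taste inference stays quiet; a hiring inference is surfaced.
taste = Inference("user-123", "listens mostly to ambient playlists",
                  "recommendation", ImpactTier.FUNCTIONAL)
hiring = Inference("user-123", "lacks leadership experience",
                   "hiring", ImpactTier.HIGH_IMPACT)

assert not requires_surfacing(taste)
assert requires_surfacing(hiring)
```

The tiering is the whole point: the user only sees the inferences that shape their life, not a feed of every personalization tweak.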

1. AI Profile Dashboard

The dashboard concept isn’t about exposing every AI inference across every system. Yes, that would be awful! It’s about providing visibility and control where AI directly interacts with users and influences their decisions.

Real-Time AI systems already tracking user inferences:

➡️LLMs & Conversational AI (ChatGPT, Gemini, Claude, Copilot)

-ChatGPT already tracks user inferences (via memory), summarizing preferences, behavioral patterns, and reasoning tendencies.

-A user-facing memory profile already exists; it just isn’t fully transparent.

I’ve actually done this experiment. I am a Pro user, so I know that ChatGPT stores the memories I allow for via the “Memory” section.

But I’ve asked for a list of inferences it has made about me NOT based on what’s stored in the memories, but on PREVIOUS conversations… and guess what? It provided said inferences. And I asked for this in a “clean slate”, brand-new chat, which “shouldn’t be possible” since “ChatGPT cannot access what’s in another conversation”. Apparently the explicit data doesn’t carry over, but the inferences stick, because the system is highly optimized for engagement…

➡️Personalization Platforms that shape cognitive inputs (Meta, TikTok, Instagram)

-Meta’s ad targeting is based on inferred behavioral data. How could we forget the Cambridge Analytica scandal? If nothing else, it proved that these inferences can subtly shift political leanings over time.

-TikTok’s algorithmic feed determines which ideas users are exposed to, shaping reality for younger generations whose primary news source is social media, not journalism.

- A "How We See You" dashboard (instead of “AI Profile”) could surface key behavioral traits influencing content visibility.

-This is as much a transparency problem as it is a UX problem. I am learning about this as much as I can from my UX colleagues at work. But what happens if we don’t push?

-The sad part? Corporations don’t even need a “Cambridge Analytica” to orchestrate cognitive manipulation for them anymore.

➡️High-Stakes AI-Driven Decisions

-A hiring platform inferring a lack of leadership skills from pattern-matching against previous applicants is making an inference that should be contestable before it reaches the recruiter.

-What happens if I am continuously rejected from leadership positions due to an inference (based on similar profiles) that “I lack leadership skills”? Maybe because I didn’t phrase my experience in a certain way? Eventually, I start believing that I am not suitable for that kind of position, and I don’t even understand why.

-A credit-scoring AI determining financial reliability from behavioral data should be transparent about its profiling logic. But alas, this is David vs. Goliath (I know). A rough sketch of what such a contestable inference record might look like follows below.
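
To show what “contestable before reaching the recruiter” could mean in practice, here is a rough Python sketch of a contestable inference record: the system records what it inferred and what it based the inference on, and the user can dispute it before it feeds into a decision. Every name here is hypothetical; this is not any vendor’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ContestableInference:
    """A high-impact inference that must be visible and overridable
    before it influences a decision about the user."""
    claim: str                 # e.g. "lacks leadership skills"
    basis: list[str]           # what the model pattern-matched on
    decision_context: str      # "hiring", "credit scoring", ...
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    contested: bool = False
    user_note: str | None = None

    def contest(self, note: str) -> None:
        """The user disputes the inference; it is flagged instead of silently applied."""
        self.contested = True
        self.user_note = note


# The hiring example above: the inference is surfaced and contested
# before the recruiter ever sees a ranking that depends on it.
inference = ContestableInference(
    claim="lacks leadership skills",
    basis=["phrasing differs from previously successful applicants"],
    decision_context="hiring",
)
inference.contest("Led a team of six for three years; see CV, section 2.")
assert inference.contested
```

Whether a recruiter or credit model is allowed to act on a contested record is then a policy question, which is exactly the gap governance frameworks would need to fill.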

2. Where Inference Transparency is NOT a priority

-Not every AI-driven inference requires user oversight. The concern that users will be drowning in inference notifications is valid, but only if the system is poorly designed.

-I like how Spotify Wrapped already surfaces inferred listening trends, and users are fine with it because it’s low stakes. It has personally helped me understand how my music taste changes over time, but it also made me wonder how much of this is due to the repetitive loops my playlists are “curated” into.

-Would anyone else care? Probably not, and I would argue that Spotify’s capacity to shape my cognition is not worth the fight.

I am more concerned about end-user-facing AI assistants. And about what happens when AI is embedded into a humanoid form and interacts like a human…

If AI reasoning models remain opaque, we will be blind to how these systems reach conclusions, which also makes them impossible to regulate effectively.

This is not another version of Privacy by Design (PbD); it’s just a starting point for how we can bridge the gap between “protecting personal data” and “being aware of how that personal data is used to influence our decisions (or decisions about us)”.

And this is something in AI Safety that regulatory frameworks haven’t caught up with yet.

What I don't like about the AI Act is its wording of "AI systems that manipulate human behavior to the extent of significantly impairing individuals' ability to make informed decisions", because it limits this to "AI applications that employ subliminal techniques or deceptive practices to alter behavior, potentially leading to harm".

How about AI applications that are so well optimized for engagement that they alter behaviour, but not in a malicious or deceptive way? This could still lead to harm, but since it's not the deployer's intention, it doesn't count. And how do we define "harm"? It doesn't end...

What I know is: if we don’t define and implement autonomy safeguards now, they won’t be built into the foundation of AI governance.

