Discussion about this post

Katalina Hernández:

Just a very clear note:

I'm not dismissing the importance of alignment work.

It's more about the governance action gap I see as a lawyer/DPO.

And, definitely something that people like me (non engineers) should really be learning about.

Carey Lening:

Hey Katalina -- This is helpful, though I've been thinking about the implications and feasibility of the dashboard idea in particular, and even after reading this post I'm still not sure how it's actually achievable. I'm almost done with my post and will share it with you before I publish.

I'm not just trying to shut down your ideas -- in general, I share your concerns about how little human autonomy is being considered at all -- but part of what makes this whole debate so challenging is that the complexity of systems (algorithms, models, and 'AI' more broadly) often gets reduced to generalities, and those generalities are what guide the rules being made.

Ignoring complexity is one reason I think that "privacy theatre" is so rampant. It's frustrating because we really should be precise when it comes to developing solutions to large-scale problems like ensuring human autonomy or privacy.

I use an analogy in my post, where I suggest substituting 'AI' with 'people' and 'inference/visibility profiles' with 'mind-reading'. In isolation, gaining visibility of inferences for some AI models/people might be achievable, and useful for individuals.

But 'AI', like algorithms or people, isn't a singular thing; often multiple models and systems feed into one another, just as inferences about you may be collective. And when you look at all the systems, models, and algorithms that make behavioral and other inferences about us, which often change dynamically (for a bunch of reasons), it quickly starts to get overwhelming.

Imagine you develop a skill to read minds and deduce the inferences of your friends and random strangers. Even if you can focus on just the thoughts and inferences related to you, there are still a lot of them -- and I'm not sure most are all that helpful to know. I argue that the same is true of most AI systems, and that it would quickly become an overwhelming, intrusive nightmare to make people responsible for monitoring, assessing, consenting to, or objecting to inferential decisions made about them.

Anyway, I'll share the larger post with you first, and I'll be curious to see how you consider the problem after reading it. You might also get some value out of reading my post on fractal complexity.

