Any suggestions on places to start for lawyers, privacy officers, or compliance leaders to get more “in the weeds” on this stuff? Are there research orgs you’d rec getting involved with?
Hi Rachel! Okay so, real talk? I have not found much digestible material on Alignment or Interpretability for lawyers. I am simply reading the research, learning the basics, and bridging the gap between the LLM engineering work and the legal work by... studying.
So, not to be pretentious, but *keep an eye out for future posts* on my Substack / LinkedIn, because providing "in the weeds" material is the main focus of my content :D.
I can recommend the AI Safety Fundamentals fast-track courses: https://aisafetyfundamentals.com/. They run 5-day intensive courses and 12-week ones on both technical AI Governance (alignment, interpretability, deployment & capabilities) and Policy & Legal.
They also have a great and digestible newsletter :).
And, of course, the CAIS newsletter on Substack: https://newsletter.safe.ai/.
Aside from this? I follow key alignment voices on X and Substack, and automation voices too. And I follow the latest research trends on arXiv and Anthropic's research page (and also DeepMind's and OpenAI's, along with other smaller labs').
It is a lot, and I realise that there aren't many people curating these concepts for DPOs and lawyers. So... I am rolling up my sleeves :).
YES! 👏👏👏 “AI privacy isn’t just about what data goes in and what comes out, it’s about how models learn, reason, and recall. Deletion requests don’t work cleanly in deep learning, but interpretability research can help us build models that never memorize sensitive data in the first place.”
So much to learn, particularly the new language for understanding, describing, and solving what's at stake. Will be staying tuned to learn more.
@Stephanie Holland - always feel free to DM me to exchange ideas, happy to help :).