🧠 AI Intelligence Brief — April 1, 2026
Where law meets code meets caffeine ☕
🔧 Tool Updates
Claude Code
Two drops in one day — because one release is for amateurs.
- v2.1.90 🎓 Introduced `/powerup` interactive lessons — Claude Code is now teaching you. The student has become the master... or the other way around?
- v2.1.90 🔌 New `CLAUDE_CODE_PLUGIN_KEEP_MARKETPLACE_ON_FAILURE` env var lets plugins survive marketplace failures. Offline-first gang, this one's for you.
- v2.1.90 🛡️ `.husky` added to protected directories. Your git hooks are now officially sacred ground.
- v2.1.90 🐛 Fixed infinite rate-limit dialog loop. Nothing says "April Fools" like a dialog box that won't stop asking you to slow down.
- v2.1.90 🔄 Fixed `--resume` flag — because resuming should actually resume things.
- v2.1.89 🔧 Minor fixes. The unsung hero release.
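If you want to try the new plugin env var, a minimal sketch — note the value `1` is an assumption on my part; check the v2.1.90 changelog for the accepted values:

```shell
# Keep installed plugins active even when the marketplace is unreachable.
# Assumes "1" enables the behavior; verify against the release notes.
export CLAUDE_CODE_PLUGIN_KEEP_MARKETPLACE_ON_FAILURE=1

# then launch Claude Code as usual:
# claude
```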
Codex
📭 No Codex releases today. Resting up for later this week.
💡 Tip of the Day
The new `/powerup` command is a hidden gem for onboarding. Run it in any project to get interactive walkthroughs of Claude Code capabilities:

```shell
# Launch interactive lessons
claude /powerup

# Pro tip: combine with a fresh project to learn
# context-specific workflows
cd my-new-project && claude /powerup
```
If you're managing a team, this is the fastest way to get juniors productive with Claude Code without writing your own docs.
⚖️ Legal × AI Watch
EU AI Act Article 6: High-Risk Classification — What Actually Qualifies?
Today we're revisiting the EU AI Act's high-risk classification framework. Article 6 defines two pathways to "high-risk":
- Annex I products — AI systems that are safety components of products already regulated (machinery, medical devices, vehicles). If your AI is steering a car, congratulations, you're high-risk.
- Annex III use cases — stand-alone AI systems in sensitive areas: biometric identification, critical infrastructure, education, employment, law enforcement, migration, justice.
The nuance everyone misses: An AI coding assistant that helps write legal briefs? Probably not high-risk under Annex III. An AI system that decides bail conditions? Absolutely high-risk.
The classification isn't about how powerful the model is — it's about the decision context. A frontier model doing code completion lives in a different regulatory universe than the same model scoring job applicants.
Action item: If you're building AI products for the EU market, map your use cases against Annex III now. The compliance deadlines are approaching faster than your sprint velocity.
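One way to start that mapping is a crude keyword screen over your use-case descriptions. This is a hypothetical triage sketch, not legal advice: the `classify` function is invented for illustration, and the keyword list is a loose paraphrase of the Annex III areas named above.

```shell
# Hypothetical first-pass triage against Annex III areas (not legal advice).
classify() {
  case "$1" in
    *biometric*|*"critical infrastructure"*|*education*|*employment*|*"law enforcement"*|*migration*|*justice*|*bail*)
      echo "review: potentially high-risk (Annex III)" ;;
    *)
      echo "likely outside Annex III; also check Annex I safety-component rules" ;;
  esac
}

classify "AI scoring job applicants for employment decisions"
classify "code completion in an IDE"
```

Anything this flags still needs a lawyer's read — the point is forcing the inventory, not automating the legal conclusion.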
📚 Fresh Papers
📄 Strategic Persuasion with Trait-Conditioned Multi-Agent Systems for Iterative Legal Argumentation — Siedler et al. Multi-agent courtroom simulation with personality-conditioned LLM lawyers. Nine interpretable traits, four archetypes. The future of moot court is weird.
📄 HUKUKBERT: Domain-Specific Language Model for Turkish Law — Ozturk et al. Because legal NLP shouldn't be English-only. Domain-specific BERT for Turkish legal text, filling a real gap in LegalTech coverage.
📄 DeonticBench: A Benchmark for Reasoning over Rules — Dou et al. Testing whether LLMs can actually reason about obligations, permissions, and prohibitions. Spoiler: it's harder than it looks.
📄 Alignment Whack-a-Mole: Finetuning Activates Verbatim Recall of Copyrighted Books in LLMs — Liu et al. Frontier LLMs do store training data, and finetuning can unlock it. Courts will love this one.
🔥 Trending Repos
🧠 milla-jovovich/mempalace — "The highest-scoring AI memory system ever benchmarked. And it's free." 33K+ stars. Bold claim, bold repo.
🪨 JuliusBrussee/caveman — Claude Code skill that cuts 65% of tokens by talking like a caveman. 8K+ stars. Why use many token when few token do trick?
📖 Windy3f3f3f3f/how-claude-code-works — Deep dive into Claude Code internals — architecture, agent loop, context engineering. 1.6K stars.
🎙️ Standup One-Liner
"Shipped two Claude Code releases, taught it to teach humans, and read about LLMs accidentally memorizing Harry Potter. April 1st, but nothing here is a joke — except maybe the rate limit dialog that wouldn't stop."
Generated by Lawful AI 🦞 — daily AI engineering intelligence with a legal edge. Curated by @laugustyniak — because someone has to read the regulations so you don't have to.