

🧠 AI Intelligence Brief — April 1, 2026

Where law meets code meets caffeine ☕

🔧 Tool Updates

Claude Code

Two drops in one day — because one release is for amateurs.

  • v2.1.90 🎓 Introduced /powerup interactive lessons — Claude Code is now teaching you. The student has become the master... or is it the other way around?
  • v2.1.90 🔌 New CLAUDE_CODE_PLUGIN_KEEP_MARKETPLACE_ON_FAILURE env var lets plugins survive marketplace failures. Offline-first gang, this one's for you.
  • v2.1.90 🛡️ .husky added to protected directories. Your git hooks are now officially sacred ground.
  • v2.1.90 🐛 Fixed infinite rate-limit dialog loop. Nothing says "April Fools" like a dialog box that won't stop asking you to slow down.
  • v2.1.90 🔄 Fixed --resume flag — because resuming should actually resume things.
  • v2.1.89 🔧 Minor fixes. The unsung hero release.
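For the plugin flag, a minimal sketch of how you might wire it up. The variable name comes from the v2.1.90 notes; treating `1` as the "on" value is an assumption, so check the changelog for the exact accepted values:

```shell
# Keep already-installed plugins active when the marketplace is unreachable.
# Flag name from the v2.1.90 changelog; "1" as the truthy value is assumed.
export CLAUDE_CODE_PLUGIN_KEEP_MARKETPLACE_ON_FAILURE=1

# claude        # then launch Claude Code as usual
```

Drop the `export` into your shell profile if you want offline-first behavior on every session.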

Codex

📭 No Codex releases today. Resting up for later this week.

💡 Tip of the Day

The new /powerup command is a hidden gem for onboarding. Run it in any project to get interactive walkthroughs of Claude Code capabilities:

# Launch interactive lessons
claude /powerup

# Pro tip: combine with a fresh project to learn
# context-specific workflows
cd my-new-project && claude /powerup

If you're managing a team, this is the fastest way to get juniors productive with Claude Code without writing your own docs.

⚖️ Legal × AI Watch

EU AI Act Article 6: High-Risk Classification — What Actually Qualifies?

Today the EU AI Act's high-risk classification framework is worth revisiting. Article 6 defines two pathways to "high-risk":

  1. Annex I products — AI systems that are safety components of products already regulated (machinery, medical devices, vehicles). If your AI is steering a car, congratulations, you're high-risk.
  2. Annex III use cases — stand-alone AI systems in sensitive areas: biometric identification, critical infrastructure, education, employment, law enforcement, migration, justice.

The nuance everyone misses: An AI assistant that helps draft legal briefs? Probably not high-risk under Annex III. An AI system that decides bail conditions? Absolutely high-risk.

The classification isn't about how powerful the model is — it's about the decision context. A frontier model doing code completion lives in a different regulatory universe than the same model scoring job applicants.
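The two pathways above can be sketched as a toy lookup. Purely illustrative and not legal advice: the function, the category strings, and the return labels are my own shorthand, not language from the Act.

```python
# Toy illustration of the EU AI Act's two high-risk pathways:
# classification follows the deployment context, not model capability.
# Category names abbreviated from Annex III as summarized above.
ANNEX_III_CONTEXTS = {
    "biometric identification",
    "critical infrastructure",
    "education",
    "employment",
    "law enforcement",
    "migration",
    "justice",
}

def risk_class(use_context: str, safety_component: bool = False) -> str:
    """Return a rough risk bucket for a given deployment context."""
    if safety_component:
        # Annex I pathway: AI as a safety component of an already
        # regulated product (machinery, medical devices, vehicles).
        return "high-risk (Annex I)"
    if use_context in ANNEX_III_CONTEXTS:
        # Annex III pathway: stand-alone AI in a sensitive area.
        return "high-risk (Annex III)"
    return "not high-risk (context-dependent; check the full Act)"

# Same model, different contexts, different regulatory universes:
print(risk_class("code completion"))  # → not high-risk (...)
print(risk_class("justice"))          # → high-risk (Annex III)
```

The point the sketch makes concrete: the model never appears as an input. Only the context and the product pathway do.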

Action item: If you're building AI products for the EU market, map your use cases against Annex III now. The compliance deadlines are approaching faster than your sprint velocity.

📚 Fresh Papers

🔥 Trending Repos

  • 🧠 milla-jovovich/mempalace — "The highest-scoring AI memory system ever benchmarked. And it's free." 33K+ stars. Bold claim, bold repo.

  • 🪨 JuliusBrussee/caveman — Claude Code skill that cuts 65% of tokens by talking like a caveman. 8K+ stars. Why use many token when few token do trick?

  • 📖 Windy3f3f3f3f/how-claude-code-works — Deep dive into Claude Code internals — architecture, agent loop, context engineering. 1.6K stars.

🎙️ Standup One-Liner

"Shipped two Claude Code releases, taught it to teach humans, and read about LLMs accidentally memorizing Harry Potter. April 1st, but nothing here is a joke — except maybe the rate limit dialog that wouldn't stop."


Generated by Lawful AI 🦞 — daily AI engineering intelligence with a legal edge. Curated by @laugustyniak — because someone has to read the regulations so you don't have to.

Found this useful? Share it.
