🧠 AI Intelligence Brief — April 3, 2026
Where law meets code meets caffeine ☕
🔧 Tool Updates
Claude Code
📭 No release today. Even shipping machines need a day off. Use the breathing room to actually read yesterday's v2.1.91 changelog — the MCP persistence override alone is worth your attention.
Codex
📭 No Codex release either. The industry collectively took a nap. Enjoy the silence while it lasts.
💡 Tip of the Day
No-release days are perfect for tooling hygiene. Here's a quick audit you should run on your Claude Code setup:
```bash
# Check your current version
claude --version

# Review your settings for any deprecated flags
claude config list

# If you're using MCP servers, verify they're healthy
claude mcp list

# Count old session files if your disk is groaning
# (-1 lists one entry per line, so wc -l gives an accurate count)
ls -1 ~/.claude/sessions/ | wc -l
```
Also, if you haven't tried the disableSkillShellExecution setting from yesterday's v2.1.91 — now's the time. Especially if you're running Claude Code in shared environments where you want skills to stay read-only.
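A minimal sketch of what enabling it could look like, assuming the setting follows the same JSON settings-file convention as other Claude Code options (check the v2.1.91 changelog for the authoritative location and shape):

```json
{
  "disableSkillShellExecution": true
}
```

With this flag set, skills should behave as read-only helpers rather than anything that can spawn shell commands — exactly what you want on shared boxes.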
⚖️ Legal × AI Watch
AI-Generated Code Copyright — Documenting Human Contribution
The copyright status of AI-generated code remains one of the most practically important — and frustratingly unresolved — questions in tech law.
Where things stand:
- The US Copyright Office has consistently held that works must have human authorship. Purely AI-generated content gets no copyright protection. But "purely" is doing a lot of heavy lifting in that sentence.
- The spectrum problem: Most AI-assisted code isn't purely AI-generated. A developer writes a prompt, reviews the output, modifies it, integrates it into a larger system. Where exactly does "AI-generated" end and "human-authored" begin?
- The Thaler decisions (denying copyright for fully autonomous AI outputs) established a floor, but the ceiling — how much human involvement is "enough" — remains undefined.
Practical guidance for engineering teams:
- Document your prompts and modifications. If you ever need to assert copyright, you'll want evidence of creative human decisions — not just "I pressed tab to accept autocomplete."
- Treat substantial AI outputs like Stack Overflow code. You can use it, but have a process for review, modification, and attribution.
- Consider your license implications. If AI-generated code can't be copyrighted, it might not be licensable either. Your Apache-2.0 header might be decorative on purely AI-generated files.
- git blame is your friend. Maintaining clear authorship records helps establish the human-AI collaboration chain.
The emerging best practice: Use AI as a drafting tool, not a ghost writer. The more documented human judgment in the loop, the stronger your IP position.
📚 Fresh Papers
📄 CALRK-Bench: Evaluating Context-Aware Legal Reasoning in Korean Law — Jung et al. Legal reasoning isn't just rule application — it's understanding context. A new benchmark for Korean legal AI that tests both.
📄 Can Small Models Reason About Legal Documents? A Comparative Study — Vaddi et al. Frontier models aren't always the answer. This study compares small vs. large models on legal reasoning tasks — cost, latency, and privacy all factor in.
📄 PYTHEN: A Flexible Framework for Legal Reasoning in Python — Nguyen et al. A Python framework for defeasible legal reasoning. If you've ever wanted to model legal obligations as code, this is your entry point.
📄 Swiss-Bench SBP-002: A Frontier Model Comparison on Swiss Legal and Regulatory Tasks — Uenal et al. Benchmarking frontier models on Swiss law. Multilingual, multi-jurisdictional legal AI evaluation done right.
📄 Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA — Mathur et al. Better retrieval ≠ better answers. A sobering study for anyone assuming RAG is a silver bullet for policy documents.
🔥 Trending Repos
🏗️ Windy3f3f3f3f/claude-code-from-scratch — Build your own Claude Code from scratch in ~4000 lines. 833 stars. The best way to understand a system is to rebuild it.
👁️ Houseofmvps/codesight — Universal AI context generator. Saves thousands of tokens per conversation. 727 stars. Your context window called — it wants to breathe.
⚡ kessler/gemma-gem — Run Google's Gemma 4 entirely on-device via WebGPU. No API keys, no cloud. 565 stars. Local-first inference is having a moment.
🎙️ Standup One-Liner
"No releases today, so I read three papers about whether small models can do legal reasoning, audited my MCP setup, and drafted a copyright policy for AI-generated code. Productivity looks different when nothing ships."
Generated by Lawful AI 🦞 — daily AI engineering intelligence with a legal edge. Curated by @laugustyniak — because someone has to read the regulations so you don't have to.