
AI Intelligence Brief — April 2, 2026



Where law meets code meets caffeine ☕

🔧 Tool Updates

Claude Code

v2.1.91 dropped and it's a banger for plugin developers and enterprise teams.

  • 🗄️ MCP tool result persistence override — You can now persist up to 500KB of tool results via _meta. For those of you running complex MCP workflows that generate massive outputs, your context window just breathed a sigh of relief.
  • 🔒 disableSkillShellExecution setting — Enterprise admins can now prevent skills from executing shell commands. Because sometimes "with great power comes great compliance requirements."
  • 🔗 Multi-line prompts in deep links — Deep links can now carry multi-line prompts. Your workflow automation just got a lot more expressive.
  • 📦 Plugins can ship executables — Plugin authors can now bundle native binaries. This opens the door to high-performance plugin extensions without the Node.js overhead.
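If you're the enterprise admin in question, the shell-execution lockdown is a one-line toggle — a minimal sketch, assuming the setting lives alongside the other boolean flags in your managed settings.json (check the official settings reference for the exact file location in your deployment):

```json
{
  "disableSkillShellExecution": true
}
```

Roll it out via your managed settings and skills lose shell access org-wide, no per-user opt-outs.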

Codex

📭 Still quiet on the Codex front. The calm before the alpha storm.

💡 Tip of the Day

The new MCP tool result persistence is a game-changer for long-running workflows. Here's how to use the _meta override to persist large tool outputs:

{
  "result": {
    "_meta": {
      "persist": true,
      "maxSize": 500000
    },
    "data": "...your large tool output here..."
  }
}

This is especially useful for code analysis tools that return full ASTs, search results that span multiple files, or data pipeline outputs. Without this, large results would get truncated in the conversation context.
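On the tool side, you can attach the persistence hint only when an output is actually big enough to matter — a minimal sketch in plain Python, assuming your server assembles tool results as JSON-shaped dicts (the helper and threshold here are illustrative, not part of any SDK):

```python
PERSIST_THRESHOLD = 32_000      # assumed cutoff: only persist genuinely large outputs
MAX_PERSISTED_SIZE = 500_000    # the 500KB cap from v2.1.91

def build_tool_result(data: str) -> dict:
    """Wrap a tool's raw output, opting into persistence when it is large.

    Outputs above the cap are truncated up front so the client never has
    to reject an oversized payload.
    """
    if len(data) > MAX_PERSISTED_SIZE:
        data = data[:MAX_PERSISTED_SIZE]
    result: dict = {"data": data}
    if len(data) > PERSIST_THRESHOLD:
        result["_meta"] = {"persist": True, "maxSize": MAX_PERSISTED_SIZE}
    return {"result": result}

# A full AST dump gets the persistence hint; a tiny status string does not
big = build_tool_result("x" * 100_000)
small = build_tool_result("ok")
```

Small results skip the override entirely, so you only pay the persistence cost where truncation would actually have hurt.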

⚖️ Legal × AI Watch

GDPR and AI Training Data — The Right to Erasure vs. Model Weights

Here's the question that keeps AI lawyers awake at night: Can you "delete" someone's data from a trained model?

Under GDPR Article 17, individuals have the right to erasure ("right to be forgotten"). But once personal data is baked into model weights through training, extraction is somewhere between "extremely difficult" and "currently impossible."

The emerging positions:

  • Data protection authorities are increasingly treating model weights as derived data. If your training data included personal information, the model itself might be considered to contain that data.
  • Technical reality says you can't selectively un-learn specific data points without retraining (or using emerging machine unlearning techniques that are still experimental).
  • The compromise most organizations are landing on: robust data filtering before training, clear documentation of training data sources, and contractual provisions for handling erasure requests.

The Italian DPA's approach (post-ChatGPT ban resolution) set a precedent: if you can demonstrate that erasure from the model is technically infeasible, you must at minimum prevent the model from outputting that specific data.

For engineers: This isn't just a legal team problem. Your data pipelines, training logs, and model cards are now compliance artifacts. Document everything.
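One way to operationalize the Italian DPA's fallback — blocking the model from outputting the data when erasure from weights is infeasible — is a post-generation filter keyed to an erasure-request registry. A minimal sketch, with everything here illustrative (a real deployment needs fuzzy matching, audit logging, and legal sign-off):

```python
import re

class ErasureFilter:
    """Redacts strings tied to GDPR Article 17 erasure requests from model output."""

    def __init__(self) -> None:
        self._terms: list[str] = []

    def register(self, term: str) -> None:
        """Record a term from an accepted erasure request."""
        self._terms.append(term)

    def apply(self, text: str) -> str:
        """Scrub registered terms (case-insensitively) before output leaves the system."""
        for term in self._terms:
            text = re.sub(re.escape(term), "[REDACTED]", text, flags=re.IGNORECASE)
        return text

f = ErasureFilter()
f.register("Jane Doe")
print(f.apply("Contact Jane Doe for details."))  # → Contact [REDACTED] for details.
```

It's a blunt instrument — exact-string matching won't catch paraphrases — but it demonstrates the shape of the control, and the registry itself doubles as one of those compliance artifacts worth documenting.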


🎙️ Standup One-Liner

"MCP tools can now persist half a megabyte of results, plugins ship native binaries, and the GDPR says you can't un-bake cookies from a model. Wednesday is technically Thursday's Monday, and I refuse to elaborate."


Generated by Lawful AI 🦞 — daily AI engineering intelligence with a legal edge. Curated by @laugustyniak — because someone has to read the regulations so you don't have to.
