{
  "title": "Lawful AI",
  "description": "Daily AI intelligence with a legal edge",
  "link": "https://laugustyniak.github.io/lawful-ai",
  "items": [
    {
      "date": "2026-05-06",
      "title": "Lawful AI Daily Brief — 2026-05-06",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "compliance"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code v2.1.129** landed with `--plugin-url` support (ZIP from URL), plus useful env toggles like `CLAUDE_CODE_FORCE_SYNC_OUTPUT=1` and `CLAUDE_CODE_ENABLE_GATEWAY_MODEL_DISCOVERY=1`.\n- **Codex** kept sprinting with `0.129.0-alpha.7/.8` and a lot of MCP hardening and confi",
      "content": "## 🛠️ Tool Updates\n- **Claude Code v2.1.129** landed with `--plugin-url` support (ZIP from URL), plus useful env toggles like `CLAUDE_CODE_FORCE_SYNC_OUTPUT=1` and `CLAUDE_CODE_ENABLE_GATEWAY_MODEL_DISCOVERY=1`.\n- **Codex** kept sprinting with `0.129.0-alpha.7/.8` and a lot of MCP hardening and config reliability work.\n- Translation: less glitter, more “this won’t break at 2am.” 😌\n\n## 💡 Tip of the Day\nIf your legal-agent stack needs better auditability, start with OTEL today:\n\n```bash\nexport OTEL_EXPORTER_OTLP_ENDPOINT=http://otel.local:4318\nexport OTEL_SERVICE_NAME=lawful-ai-agent\n```\n\nThen log model/provider/tool-call metadata per run so you can answer compliance questions with receipts, not vibes.\n\n## ⚖️ Legal x AI Watch\n- Active legal/compliance repo movement today includes:\n  - [`kody32/eu-ai-act-guide`](https://github.com/kody32/eu-ai-act-guide) — interactive EU AI Act + GDPR guide for startups/SMEs.\n  - [`aksiom-dev/eu-ai-act-evals`](https://github.com/aksiom-dev/eu-ai-act-evals) — eval prompts mapped to EU AI Act concerns.\n  - [`aevum-labs/aevum`](https://github.com/aevum-labs/aevum) — policy engine + audit trail + EU AI Act angle.\n\n## 📚 Fresh Papers\n- **Safety and accuracy follow different scaling laws in clinical large language models**  \n  https://arxiv.org/abs/2605.04039v1\n- **OpenSeeker-v2: Pushing the Limits of Search Agents with Informative and High-Difficulty Trajectories**  \n  https://arxiv.org/abs/2605.04036v1\n- **EQUITRIAGE: A Fairness Audit of Gender Bias in LLM-Based Emergency Department Triage**  \n  https://arxiv.org/abs/2605.03998v1\n- **Logical Consistency as a Bridge: Improving LLM Hallucination Detection via Label Constraint Modeling**  \n  https://arxiv.org/abs/2605.03971v1\n- **Atomic Fact-Checking Increases Clinician Trust in LLM Recommendations for Oncology Decision Support**  \n  https://arxiv.org/abs/2605.03916v1\n\n## 🔥 Trending Repos\n- 
[`Significant-Gravitas/AutoGPT`](https://github.com/Significant-Gravitas/AutoGPT) ⭐183k\n- [`langgenius/dify`](https://github.com/langgenius/dify) ⭐140k\n- [`NousResearch/hermes-agent`](https://github.com/NousResearch/hermes-agent) ⭐132k\n- [`mudler/LocalAI`](https://github.com/mudler/LocalAI) ⭐46k\n- [`langchain-ai/langgraph`](https://github.com/langchain-ai/langgraph) ⭐31k\n\n## 🎤 Standup One-Liner\n“Today we tightened our agent stack around MCP reliability and observability, while tracking fresh EU-AI-Act-aligned repos so compliance stays a feature, not a fire drill.”\n\n---\nRepo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/05/06"
    },
    {
      "date": "2026-05-05",
      "title": "Lawful AI Daily Brief — 2026-05-05",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "legal-ai"
      ],
      "summary": "⚙️ Tool Updates\n- **Claude Code v2.1.128** rolled in with cleaner `/mcp` visibility (including zero-tool server detection), better reconnect signal/noise, and `.zip` plugin-dir support.\n- **Workflow sanity boost:** `EnterWorktree` now branches from local `HEAD`, and parallel read-only shell calls d",
      "content": "## ⚙️ Tool Updates\n- **Claude Code v2.1.128** rolled in with cleaner `/mcp` visibility (including zero-tool server detection), better reconnect signal/noise, and `.zip` plugin-dir support.\n- **Workflow sanity boost:** `EnterWorktree` now branches from local `HEAD`, and parallel read-only shell calls don’t all die when one sibling fails.\n- **Codex CLI** is still sprinting through alpha drops (`0.129.0-alpha.4 → .6`) with heavy packaging modularization momentum.\n\n## 💡 Tip of the Day\nTreat compliance evidence like build artifacts, not afterthoughts.\n\n```bash\n# Quick daily evidence capture pattern\ncodex --version\nclaude /mcp\nprintf \"%s | model=%s | tool_snapshot=ok\\n\" \"$(date -Iseconds)\" \"$(codex --version)\" >> .ai-audit/evidence.log\n```\n\n## ⚖️ Legal x AI Watch\n- Legal/compliance-flavored repos with fresh activity:\n  - `CSOAI-ORG/care-membrane-mcp`\n  - `CSOAI-ORG/healthcare-fhir-mcp`\n  - `StrangeDaysTech/devtrail`\n  - `airblackbox/air-trust`\n  - `airblackbox/airblackbox`\n- Signal: teams keep pushing **EU AI Act + auditability + MCP** patterns into practical tooling.\n\n## 📚 Fresh Papers\n- **Accurate Legal Reasoning at Scale: Neuro-Symbolic Offloading and Structural Auditability for Robust Legal Adjudication** — http://arxiv.org/abs/2605.02472v1\n- **SCPRM: A Schema-aware Cumulative Process Reward Model for Knowledge Graph Question Answering** — http://arxiv.org/abs/2605.02819v1\n- **InfoLaw: Information Scaling Laws for Large Language Models with Quality-Weighted Mixture Data and Repetition** — http://arxiv.org/abs/2605.02364v1\n\n## 🔥 Trending Repos\n- `Significant-Gravitas/AutoGPT`\n- `langgenius/dify`\n- `NousResearch/hermes-agent`\n- `bytedance/deer-flow`\n- `openai/openai-agents-python`\n\n## 🎤 Standup One-Liner\n“Yesterday we tightened agent workflows and doubled down on audit-grade evidence trails, so velocity went up while compliance risk went down — rare win-win unlocked 😎.”\n\n---\nRepo: 
https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/05/05"
    },
    {
      "date": "2026-05-04",
      "title": "Lawful AI Daily Brief — 2026-05-04",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "compliance"
      ],
      "summary": "🛠️ Tool Updates\n- **Codex CLI 0.128.0** keeps getting better: persistent `/goal`, `codex update`, stronger permission profiles, and cleaner plugin lifecycle.\n- **Claude Code v2.1.126** remains the practical winner for regulated workflows: project purge controls, better model discovery, and improve",
      "content": "## 🛠️ Tool Updates\n- **Codex CLI 0.128.0** keeps getting better: persistent `/goal`, `codex update`, stronger permission profiles, and cleaner plugin lifecycle.\n- **Claude Code v2.1.126** remains the practical winner for regulated workflows: project purge controls, better model discovery, and improved skill telemetry.\n- **Community pulse:** `awesome-claude-code` looked mostly like docs/structure maintenance, not a fresh wave of new skills.\n\n## 💡 Tip of the Day\nTreat your agent permission profile like legal policy, not just dev config:\n\n```yaml\nprofiles:\n  strict:\n    fs: read-only\n    net: allowlist\n    net_hosts: [api.openai.com, github.com]\n    approvals: on-request\n```\n\n## ⚖️ Legal x AI Watch\n- Compliance-focused repos still moving fast:\n  - `arqajalvarez/aibom-scanner`\n  - `Jozithe3019/ai-act-guardian`\n  - `Kabbalistgenusmasdevallia941/EUfirst`\n- Pattern worth noting: more teams are baking EU AI Act mapping into scanner/tooling layers instead of handling it in docs after the fact.\n\n## 📈 Trending Repos\n- `f/prompts.chat` — giant prompt hub that still dominates stars.\n- `MemPalace/mempalace` — open-source AI memory system with strong momentum.\n- `langchain-ai/langgraph` — agent graph workflows keep pulling attention.\n- `onyx-dot-app/onyx` — open AI chat platform climbing steadily.\n- `mastra-ai/mastra` and `yamadashy/repomix` — builder tools with strong practical adoption.\n\n## 🎤 Standup One-Liner\n“We tightened agent permissions and compliance traceability while keeping delivery speed high—so we’re shipping faster *and* making auditors less scary.”\n\n---\n🔗 Repo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/05/04"
    },
    {
      "date": "2026-05-03",
      "title": "Lawful AI Daily Brief — 2026-05-03",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "compliance"
      ],
      "summary": "🛠️ Tool Updates\n- **Codex CLI 0.128.0** is bringing operator energy: persisted `/goal` flows, `codex update`, stronger permission profiles, and cleaner plugin lifecycle.\n- **Claude Code v2.1.126** (just outside 48h, still hot) added gateway model discovery (`/model`), `claude project purge`, and b",
      "content": "## 🛠️ Tool Updates\n- **Codex CLI 0.128.0** is bringing operator energy: persisted `/goal` flows, `codex update`, stronger permission profiles, and cleaner plugin lifecycle.\n- **Claude Code v2.1.126** (just outside 48h, still hot) added gateway model discovery (`/model`), `claude project purge`, and better OAuth fallback in WSL/SSH/container setups.\n- **Trend vibe:** fewer “where did my tool go?” moments, more “it just shipped.”\n\n## 💡 Tip of the Day\nMake compliance checks a merge blocker, not a retro meeting:\n\n```yaml\n# .github/workflows/ai-compliance.yml\nname: ai-compliance\non: [pull_request]\njobs:\n  check:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n      - uses: Varshith-07/eu-ai-act-check-action@main\n```\n\n## ⚖️ Legal x AI Watch\n- Legal/compliance-flavored repos updated in the last day include:\n  - [`aevum-labs/aevum`](https://github.com/aevum-labs/aevum) — policy engine + audit trail + EU AI Act tags.\n  - [`Unawakened-landlord758/ClawGuard`](https://github.com/Unawakened-landlord758/ClawGuard) — guardrails for agent actions, leak prevention, and audit logs.\n  - [`ivanmoralesf2015-sudo/enterprise-governance-toolkit`](https://github.com/ivanmoralesf2015-sudo/enterprise-governance-toolkit) — governance/risk toolkit with EU AI Act and ISO 42001 themes.\n- Signal: governance-by-default tooling is moving from “nice idea” to “actual repo with commits.”\n\n## 📈 Trending Repos\n- [`Significant-Gravitas/AutoGPT`](https://github.com/Significant-Gravitas/AutoGPT) — still a giant in agent land.\n- [`langgenius/dify`](https://github.com/langgenius/dify) — strong momentum for production agent workflows.\n- [`firecrawl/firecrawl`](https://github.com/firecrawl/firecrawl) — web data layer remains a core AI stack primitive.\n- [`NousResearch/hermes-agent`](https://github.com/NousResearch/hermes-agent) — fast-moving agent framework with huge traction.\n\n## 🎤 Standup One-Liner\n“Yesterday we tightened agent 
workflows and moved compliance checks into CI, so we can ship faster *and* leave an audit trail legal won’t hate.”\n\n---\nRepo: [github.com/laugustyniak/lawful-ai-staging](https://github.com/laugustyniak/lawful-ai-staging)",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/05/03"
    },
    {
      "date": "2026-05-02",
      "title": "Lawful AI Daily Brief — 2026-05-02",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "compliance"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code v2.1.126** added gateway-backed `/model` discovery, `claude project purge --dry-run`, richer skill telemetry, and tighter MCP reliability.\n- **Codex CLI v0.128.0** shipped persistent `/goal` flows, `codex update`, configurable keymaps, stronger permission profiles,",
      "content": "## 🛠️ Tool Updates\n- **Claude Code v2.1.126** added gateway-backed `/model` discovery, `claude project purge --dry-run`, richer skill telemetry, and tighter MCP reliability.\n- **Codex CLI v0.128.0** shipped persistent `/goal` flows, `codex update`, configurable keymaps, stronger permission profiles, and improved plugin/hook lifecycle.\n- **Codex v0.129.0-alpha** is already rolling — release train is still in espresso mode. ☕\n\n## 💡 Tip of the Day\nTreat permissions like policy-as-code and enforce them in CI.\n\n```yaml\n# .github/workflows/lawful-ai-guard.yml\nsteps:\n  - run: codex update\n  - run: claude project purge . --dry-run\n  - run: ./scripts/check_ai_logging.sh\n  - run: ./scripts/check_human_override.sh\n```\n\n## ⚖️ Legal x AI Watch\n- Fresh legal/compliance-adjacent repo activity:\n  - [`Sobri01/ISO-TDLA-Framework`](https://github.com/Sobri01/ISO-TDLA-Framework) — technical definitions for AI regulation clarity.\n  - [`AthenaCore/AwesomeResponsibleAI`](https://github.com/AthenaCore/AwesomeResponsibleAI) — updated responsible AI regulation/resources list.\n  - [`yagnesh44/Legis`](https://github.com/yagnesh44/Legis) — full-stack AI compliance assistant for business regulations.\n- Compliance nudge: map AI Act risk controls to concrete tooling checks (logging, human override, traceability) before merge, not after incident review.\n\n## 📈 Trending Repos\n- **Top new repos:**\n  - `andrebaltieri/live-maf-2026-04-30` (⭐16, C#)\n  - `badlogic/gpt-2-ts` (⭐11, TypeScript)\n- **AI/LLM movers:**\n  - `Significant-Gravitas/AutoGPT`\n  - `f/prompts.chat`\n  - `open-webui/open-webui`\n  - `NousResearch/hermes-agent`\n  - `hiyouga/LlamaFactory`\n\n## 🎤 Standup One-Liner\n“Hardened our agent stack with explicit permission policy and audit-ready checks, so we can move fast *and* stay EU-AI-Act-friendly.”\n\n---\nRepo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/05/02"
    },
    {
      "date": "2026-05-01",
      "title": "⚖️🤖 Lawful AI Daily Brief — 2026-05-01",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "legal-tech"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code (v2.1.126):** `claude project purge` landed (`--dry-run`, `--all`, `--interactive`) for full project-state cleanup.\n- **Claude model routing:** `/model` now reads from gateway `/v1/models`.\n- **Observability bump:** `claude_code.skill_activated` now logs trigger sou",
      "content": "## 🛠️ Tool Updates\n- **Claude Code (v2.1.126):** `claude project purge` landed (`--dry-run`, `--all`, `--interactive`) for full project-state cleanup.\n- **Claude model routing:** `/model` now reads from gateway `/v1/models`.\n- **Observability bump:** `claude_code.skill_activated` now logs trigger source (`user-slash`, proactive, nested-skill).\n- **Codex CLI (0.128.0):** persistent `/goal` workflows (create/pause/resume/clear), `codex update`, richer keymaps/status controls.\n- **Permission posture:** `--full-auto` is being phased out in favor of explicit trust profiles. (Finally, governance with muscles 💪)\n\n## 💡 Tip of the Day\nUse risk-tiered execution profiles so legal workflows stay audit-friendly by default:\n\n```yaml\nprofiles:\n  legal_review:\n    fs: read-only\n    network: deny\n  drafting:\n    fs: workspace-write\n    network: allowlist\n    allow_domains: [eur-lex.europa.eu]\n```\n\n## ⚖️ Legal x AI Watch\n- Newly active legal/regulation-adjacent repos:\n  - [ReguNav/app](https://github.com/ReguNav/app)\n  - [nicholasraimbault/skytale](https://github.com/nicholasraimbault/skytale)\n  - [konjoai/squash](https://github.com/konjoai/squash)\n- Compliance angle: map agent permission profiles to legal risk tiers and log every escalation event.\n\n## 📚 Fresh Papers\n- **APPSI-139: A Parallel Corpus of English Application Privacy Policy Summarization and Interpretation** — dataset for clearer privacy-policy summarization and interpretation.\n- **Exploration Hacking: Can LLMs Learn to Resist RL Training?** — investigates strategic behavior during RL post-training.\n- **Latent Adversarial Detection** — activation-level probing for multi-turn attack detection.\n- **NeocorRAG** — evidence-chain RAG to reduce irrelevant retrieval and boost grounded recall.\n- **Iterative Multimodal RAG for Medical QA** — retrieval loop using multimodal evidence.\n\n## 🔥 Trending Repos\n- **AutoGPT** — ⭐183k — agent workflow platform.\n- **prompts.chat** — 
⭐161k — giant prompt library.\n- **dify** — ⭐139k — production platform for agentic workflows.\n- **langchain** — ⭐135k — agent engineering stack.\n- **hermes-agent** — ⭐125k — personalizable agent framework.\n\n## 🎤 Standup One-Liner\nI tightened our AI stack with goal persistence + explicit trust profiles, and lined it up with compliance-friendly guardrails so speed doesn’t outrun auditability.\n\n---\nRepo: [lawful-ai-staging](https://github.com/laugustyniak/lawful-ai-staging)",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/05/01"
    },
    {
      "date": "2026-04-30",
      "title": "Lawful AI Daily Brief — 2026-04-30",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "legal-tech"
      ],
      "summary": "⚙️ Tool Updates\n- **Claude Code v2.1.123** fixed the OAuth 401 retry loop when experimental betas are disabled. Less auth drama, more shipping.\n- **Claude Code v2.1.122** added `ANTHROPIC_BEDROCK_SERVICE_TIER`, smarter `/resume` from pasted PR URLs, and cleaner `/mcp` behavior.\n- **Codex CLI** drop",
      "content": "## ⚙️ Tool Updates\n- **Claude Code v2.1.123** fixed the OAuth 401 retry loop when experimental betas are disabled. Less auth drama, more shipping.\n- **Claude Code v2.1.122** added `ANTHROPIC_BEDROCK_SERVICE_TIER`, smarter `/resume` from pasted PR URLs, and cleaner `/mcp` behavior.\n- **Codex CLI** dropped `0.126.0-alpha.15 → .17` quickly, with packaging that now looks like a mini AI runtime stack (`codex-app-server`, command runner, responses proxy).\n\n## 💡 Tip of the Day\nIf you want clean audit breadcrumbs for compliance reviews, hook every tool call into a tiny JSON log:\n\n```python\n# audit_log.py\nimport json,sys,time,hashlib\ne=json.load(sys.stdin)\nrow={\n \"ts\":int(time.time()),\n \"tool\":e.get(\"tool_name\"),\n \"prompt_hash\":hashlib.sha256((e.get(\"input\",\"\")+\"\").encode()).hexdigest()[:16],\n \"output_id\":e.get(\"tool_use_id\")\n}\nprint(json.dumps(row))\n```\n\n## ⚖️ Legal x AI Watch\n- `artvana/global-ai-atlas` was updated — structured, machine-readable global AI regulation tracking.\n- `kody32/eu-ai-act-guide` was updated — practical EU AI Act + GDPR guidance for startups/SMEs.\n- `Alvoradozerouno/GENESIS-v10.1` was updated — EU banking compliance framing with AI Act links.\n- `simaba/ai-prism` was updated — curated responsible-AI governance resources.\n\n## 📚 Fresh Papers\n- **Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models**  \n  https://arxiv.org/abs/2604.26951\n- **ClassEval-Pro: A Cross-Domain Benchmark for Class-Level Code Generation**  \n  https://arxiv.org/abs/2604.26923\n- **ClawGym: A Scalable Framework for Building Effective Claw Agents**  \n  https://arxiv.org/abs/2604.26904\n\n## 🔥 Trending Repos\n- `f/prompts.chat` — ⭐ 161k — still the prompt bazaar boss.\n- `langgenius/dify` — ⭐ 139k — agentic workflow platform keeps climbing.\n- `langchain-ai/langchain` — ⭐ 135k — no signs of slowing down.\n- `NousResearch/hermes-agent` — ⭐ 123k — agent framework momentum stays 
strong.\n- `bytedance/deer-flow` — ⭐ 64k — long-horizon super-agent harness keeps trending.\n\n## 🎤 Standup One-Liner\n“Today we tightened our Claude/Codex stack and turned compliance logging from ‘nice idea’ into copy-paste reality.”\n\n---\nSource repo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/30"
    },
    {
      "date": "2026-04-29",
      "title": "Lawful AI Daily Brief — 2026-04-29",
      "tags": [
        "lawful-ai",
        "ai-engineering",
        "legal-tech",
        "daily-brief"
      ],
      "summary": "🛠️ Tool Updates\n- Claude Code kept polishing operator life: OAuth retry loop fix in `v2.1.123`, better MCP duplicate handling, and `alwaysLoad` for MCP servers.\n- Codex CLI is on espresso shots again: `0.126.0-alpha.9 → .13` with heavier MCP/subagent import workflows and tighter provider/tool gati",
      "content": "## 🛠️ Tool Updates\n- Claude Code kept polishing operator life: OAuth retry loop fix in `v2.1.123`, better MCP duplicate handling, and `alwaysLoad` for MCP servers.\n- Codex CLI is on espresso shots again: `0.126.0-alpha.9 → .13` with heavier MCP/subagent import workflows and tighter provider/tool gating.\n- Practical takeaway: orchestration is getting stronger *and* stricter — great for compliance-minded automation.\n\n## 💡 Tip of the Day\nIf your legal workflow keeps “forgetting” tools, force-load your legal MCP server and stop context roulette 🎯\n\n```json\n{\n  \"mcpServers\": {\n    \"legaldb\": {\n      \"url\": \"http://localhost:8787\",\n      \"alwaysLoad\": true\n    }\n  }\n}\n```\n\n## ⚖️ Legal x AI Watch\n- Fresh regulation-focused repo activity:\n  - [`artvana/global-ai-atlas`](https://github.com/artvana/global-ai-atlas) — machine-readable global AI legislation atlas.\n  - [`ThatLadyKatieGovernance/Ai-governance-analyses`](https://github.com/ThatLadyKatieGovernance/Ai-governance-analyses) — governance framework breakdowns and strategy notes.\n  - [`Alvoradozerouno/GENESIS-v10.1`](https://github.com/Alvoradozerouno/GENESIS-v10.1) — sovereign-AI compliance framing (EU AI Act + financial regs).\n- Fast compliance win: log model + tool + retrieved sources per legal answer so audits don’t become archaeological digs later.\n\n## 📚 Fresh Papers\n- **Navigating Global AI Regulation: A Multi-Jurisdictional Retrieval-Augmented Generation System**  \n  https://arxiv.org/abs/2604.25448v1\n- **CORAL: Adaptive Retrieval Loop for Culturally-Aligned Multilingual RAG**  \n  https://arxiv.org/abs/2604.25676v1\n- **Carbon-Taxed Transformers: A Green Compression Pipeline for Overgrown Language Models**  \n  https://arxiv.org/abs/2604.25903v1\n\n## 📈 Trending Repos\n- `Significant-Gravitas/AutoGPT` ⭐ 183k — still the big autonomous-agent gravity well.\n- `langgenius/dify` ⭐ 139k — production agent workflow platform momentum continues.\n- 
`langchain-ai/langgraph` ⭐ 30k — graph-style resilient agent architecture keeps climbing.\n\n## 🎤 Standup One-Liner\n“Yesterday we tightened our legal AI stack so it’s faster in practice and much easier to defend in a compliance review.”\n\n---\nRepo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/29"
    },
    {
      "date": "2026-04-28",
      "title": "Lawful AI Daily Brief — 2026-04-28",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "legal-ai"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code v2.1.121** added juicy operator upgrades: `alwaysLoad` for MCP servers, plugin pruning, and global `PostToolUse` output rewriting.\n- **Codex CLI** keeps sprinting through `0.126.0-alpha.6 → .8` with better permission/profile plumbing and smoother active-turn UX.\n- *",
      "content": "## 🛠️ Tool Updates\n- **Claude Code v2.1.121** added juicy operator upgrades: `alwaysLoad` for MCP servers, plugin pruning, and global `PostToolUse` output rewriting.\n- **Codex CLI** keeps sprinting through `0.126.0-alpha.6 → .8` with better permission/profile plumbing and smoother active-turn UX.\n- **Practical takeaway:** we’re entering “agent platform hardening” season — fewer magic demos, more production stamina.\n\n## 💡 Tip of the Day\nIf you want legal MCP tools always ready (no lazy wake-up lag), pin them as always-on:\n\n```json\n{\"mcpServers\":{\"legal\":{\"command\":\"python server.py\",\"alwaysLoad\":true}}}\n```\n\n## ⚖️ Legal x AI Watch\n- New/active governance-heavy repos popped up:\n  - [`dcp-ai-protocol/agno-dcp`](https://github.com/dcp-ai-protocol/agno-dcp) — cryptographic governance + EU AI Act mapping.\n  - [`abhaykshir/aigov`](https://github.com/abhaykshir/aigov) — CLI for AI system risk discovery and governance views.\n  - [`anjumhassan1927-cpu/indian-ai-regulation-tracker`](https://github.com/anjumhassan1927-cpu/indian-ai-regulation-tracker) — cross-jurisdiction AI regulation tracker (EU included).\n- Compliance nudge: log *every* agent decision with immutable IDs + output hashes; future audit-you will sleep better 😴\n\n## 📚 Fresh Papers\n- **Towards Lawful Autonomous Driving** — derives scenario-aware driving requirements from traffic laws/regulations.  \n  https://arxiv.org/abs/2604.24562\n- **Long-Context Aware Upcycling** — hybrid long-context scaling without full retraining from scratch.  \n  https://arxiv.org/abs/2604.24715\n- **XGRAG** — explainability framework for KG-based RAG pipelines.  
\n  https://arxiv.org/abs/2604.24623\n\n## 🔥 Trending Repos\n- **New repo momentum:** `evanklem/evanflow` (⭐80) and `ultraworkers/hbackup` (⭐11).\n- **AI/LLM heavy hitters still climbing:** `AutoGPT`, `dify`, `hermes-agent`, `deer-flow`, `mempalace`.\n\n## 🎤 Standup One-Liner\n“Hardened our agent stack for compliance-grade traceability while keeping delivery speed high — less chaos, more receipts.”\n\n---\nRepo: [lawful-ai-staging](https://github.com/laugustyniak/lawful-ai-staging)",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/28"
    },
    {
      "date": "2026-04-27",
      "title": "Lawful AI Daily Brief — 2026-04-27",
      "tags": [
        "daily-brief",
        "lawful-ai",
        "ai-engineering",
        "compliance"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code**: no fresh drop in the latest window, but v2.1.119 is still paying rent — `PostToolUse`/`PostToolUseFailure` now expose `duration_ms`, MCP reconnects run in parallel, and `/config` sticks to `~/.claude/settings.json`.\n- **Codex CLI** is in espresso mode ☕: `0.126.0",
      "content": "## 🛠️ Tool Updates\n- **Claude Code**: no fresh drop in the latest window, but v2.1.119 is still paying rent — `PostToolUse`/`PostToolUseFailure` now expose `duration_ms`, MCP reconnects run in parallel, and `/config` sticks to `~/.claude/settings.json`.\n- **Codex CLI** is in espresso mode ☕: `0.126.0-alpha.2 → .3 → .4` landed quickly, with stronger multi-component artifacts (`codex-app-server`, `codex-responses-api-proxy`, `codex-command-runner`) plus signing metadata.\n- **Community pulse:** `awesome-claude-code` remained mostly auto-maintenance in the last commit slice (ticker/SVG refreshes, no major new skill waves).\n\n## 💡 Tip of the Day\nIf you want compliance evidence without manual pain, log tool latency + identity on every run:\n\n```bash\njq -nc --arg tool \"$TOOL_NAME\" --argjson ms \"$DURATION_MS\" '{ts:now,tool:$tool,duration_ms:$ms,model:env.MODEL,output_sha:env.OUTPUT_SHA}' >> .ai-audit/tool-events.jsonl\n```\n\nTiny hook, big audit trail. Future-you (and legal) will send thanks. 
🧾\n\n## ⚖️ Legal x AI Watch\n- Compliance repo activity is still hot around **EU AI Act implementation tooling**:\n  - [`CSOAI-ORG/meok-watermark-attest-mcp`](https://github.com/CSOAI-ORG/meok-watermark-attest-mcp) — Article 50-style watermarking/provenance + disclosure workflows.\n  - [`CSOAI-ORG/meok-cra-annex-iv-classifier-mcp`](https://github.com/CSOAI-ORG/meok-cra-annex-iv-classifier-mcp) — CRA Annex IV security-requirement classification patterns.\n  - [`CSOAI-ORG/meok-omnibus-tracker-mcp`](https://github.com/CSOAI-ORG/meok-omnibus-tracker-mcp) — timeline tracking across AI Act + GDPR + DORA.\n- Practical takeaway: teams are shifting from “policy docs” to **machine-verifiable compliance plumbing** (provenance, attestations, and signed evidence).\n\n## 🔥 Trending Repos\n- **New repo with early traction:**\n  - `NyxTides/ppt-image-first` — Python — ⭐ 15\n- **AI/LLM repos still soaking up stars + activity:**\n  - `f/prompts.chat`\n  - `langgenius/dify`\n  - `NousResearch/hermes-agent`\n  - `firecrawl/firecrawl`\n  - `bytedance/deer-flow`\n\n## 🧍 Standup One-Liner\nToday I tightened our AI engineering + compliance radar: same velocity, better receipts, fewer “trust me bro” moments. 😎\n\n---\nRepo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/27"
    },
    {
      "date": "2026-04-26",
      "title": "Lawful AI Daily Brief — 2026-04-26",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "compliance"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code:** no brand-new drop in the last 48h, but recent updates still hit hard — MCP tool hooks (`mcp_tool`), cleaner `/usage`, and better parallel MCP reconnects.\n- **Codex CLI:** `0.125.0` + `0.126.0-alpha` line brought richer app-server plumbing and `reasoning_tokens` i",
      "content": "## 🛠️ Tool Updates\n- **Claude Code:** no brand-new drop in the last 48h, but recent updates still hit hard — MCP tool hooks (`mcp_tool`), cleaner `/usage`, and better parallel MCP reconnects.\n- **Codex CLI:** `0.125.0` + `0.126.0-alpha` line brought richer app-server plumbing and `reasoning_tokens` in JSON output for better cost/perf observability.\n- **Community pulse:** `awesome-claude-code` stayed quiet on net-new skills/tools (mostly automated ticker refreshes).\n\n## 💡 Tip of the Day\nTrack reasoning spend like a responsible chaos goblin:\n\n```bash\ncodex exec --json \"audit auth flow\" | jq '.usage.reasoning_tokens'\n```\n\nBonus: pipe this into a CSV audit log if you need EU-AI-Act-friendly breadcrumbs.\n\n## ⚖️ Legal x AI Watch\nFresh compliance-flavored repo activity in the last day:\n- [`eric-devismes/ai-register-eu`](https://github.com/eric-devismes/ai-register-eu) — enterprise AI systems compliance database for EU-oriented policy tracking.\n- [`StrangeDaysTech/devtrail`](https://github.com/StrangeDaysTech/devtrail) — governance + audit trails aligned with ISO 42001, with explicit EU AI Act/NIST references.\n- [`unterdacker/venshield`](https://github.com/unterdacker/venshield) — vendor risk dashboard with human-in-the-loop controls and EU AI Act compliance framing.\n\nPractical compliance move: log model ID, prompt hash, permission mode, and tool trace per run. 
Future-you (and auditors) will thank you.\n\n## 📈 Trending Repos\nCurrent AI/LLM movers from the latest trending snapshot:\n- [`Significant-Gravitas/AutoGPT`](https://github.com/Significant-Gravitas/AutoGPT)\n- [`f/prompts.chat`](https://github.com/f/prompts.chat)\n- [`langgenius/dify`](https://github.com/langgenius/dify)\n- [`NousResearch/hermes-agent`](https://github.com/NousResearch/hermes-agent)\n- [`bytedance/deer-flow`](https://github.com/bytedance/deer-flow)\n\n## 🎤 Standup One-Liner\n\"Yesterday I tightened Codex telemetry and permission profiling, so our AI workflows move faster *and* leave a cleaner compliance trail.\"\n\n---\n🔗 Repo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/26"
    },
    {
      "date": "2026-04-25",
      "title": "Lawful AI Daily Brief — 2026-04-25",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "compliance"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code v2.1.119/118** shipped sticky `/config`, direct MCP hook calls (`type: \"mcp_tool\"`), visual mode upgrades, and cleaner privacy boot controls.\n- **Codex CLI 0.125.0** brought stronger app-server plumbing (Unix sockets, better resume/fork flows, sticky envs, and clean",
      "content": "## 🛠️ Tool Updates\n- **Claude Code v2.1.119/118** shipped sticky `/config`, direct MCP hook calls (`type: \"mcp_tool\"`), visual mode upgrades, and cleaner privacy boot controls.\n- **Codex CLI 0.125.0** brought stronger app-server plumbing (Unix sockets, better resume/fork flows, sticky envs, and cleaner permission-profile consistency).\n- Net effect: faster agent loops with fewer “why did this break in automation?” moments. ⚡\n\n## 💡 Tip of the Day\nWant speed **and** auditability? Start tracking reasoning-token burn in your normal run logs:\n\n```bash\ncodex exec --json \"analyze src/\" | jq '.usage.reasoning_tokens'\n```\n\n## ⚖️ Legal x AI Watch\n- Fresh legal-AI repo activity (last day):\n  - [`Jozithe3019/ai-act-guardian`](https://github.com/Jozithe3019/ai-act-guardian) — EU AI Act code-audit tooling with taint analysis.\n  - [`ra7701/aulite`](https://github.com/ra7701/aulite) — policy proxy for AI services with audit-first controls.\n  - [`Kabbalistgenusmasdevallia941/EUfirst`](https://github.com/Kabbalistgenusmasdevallia941/EUfirst) — EU-sovereign AI tools tracker.\n- Compliance move worth stealing: keep an append-only log per run (model/provider + tool calls + policy decision). 
Your future audit self will thank you.\n\n## 📈 Trending Repos\n**Top new repos (created since yesterday):**\n- `connectfarm1/accumulation-radar` ⭐ 47 — Python — crypto accumulation + OI anomaly detection.\n- `AI45Lab/Safactory` ⭐ 17 — Python — scalable factory for trustworthy autonomous agents.\n- `adamjramirez/sig-releases` ⭐ 16 — release tracker repo for AI-assisted team updates.\n\n**AI/LLM repos pushed recently:**\n- `Significant-Gravitas/AutoGPT` ⭐ 183k\n- `f/prompts.chat` ⭐ 160k\n- `langgenius/dify` ⭐ 138k\n- `NousResearch/hermes-agent` ⭐ 113k\n- `firecrawl/firecrawl` ⭐ 111k\n\n## 🗣️ Standup One-Liner\nWe tightened the AI toolchain so agents can move faster, and every serious action now leaves a compliance-friendly breadcrumb trail.\n\n---\n🔗 Repo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/25"
    },
    {
      "date": "2026-04-24",
      "title": "Lawful AI Daily Brief — 2026-04-24",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "legal-ai"
      ],
      "summary": "⚙️ Tool Updates\n- **Claude Code v2.1.119/118** tightened automation behavior: `--print` now respects tool allow/deny policies, `--agent` obeys permission mode, and hooks can call MCP tools directly.\n- **Codex CLI v0.124.0/0.123.0** stabilized hooks, improved MCP diagnostics (`/mcp verbose`), and fi",
      "content": "## ⚙️ Tool Updates\n- **Claude Code v2.1.119/118** tightened automation behavior: `--print` now respects tool allow/deny policies, `--agent` obeys permission mode, and hooks can call MCP tools directly.\n- **Codex CLI v0.124.0/0.123.0** stabilized hooks, improved MCP diagnostics (`/mcp verbose`), and fixed permission-state drift + queued-agent waiting.\n- Net effect: fewer “agent did a mystery thing” moments, more predictable runs with cleaner receipts. 🧾\n\n## 💡 Tip of the Day\nIf you want legal-grade auditability without slowing dev speed, add a pre-patch compliance guard:\n\n```toml\n# codex config.toml\n[hooks.pre_apply_patch]\ncommand = \"python scripts/legal_guard.py\"\n```\n\nTiny hook, huge compliance vibes. ✅\n\n## ⚖️ Legal x AI Watch\n- Freshly updated legal/compliance repos worth peeking at:\n  - [ai5labs/singleaxis-fabric](https://github.com/ai5labs/singleaxis-fabric) — audit-ready AI agent substrate with guardrails/telemetry.\n  - [vindicara-inc/projectair](https://github.com/vindicara-inc/projectair) — AI-agent incident response + signed forensic evidence exports.\n  - [ark-forge/mcp-eu-ai-act](https://github.com/ark-forge/mcp-eu-ai-act) — EU AI Act compliance scanner for codebases.\n- Compliance nudge: hash + log every tool call (input/output + timestamp + actor) so traceability isn’t a last-minute panic project.\n\n## 🧪 Fresh Papers\n- **Evaluation of Automatic Speech Recognition Using Generative Large Language Models** — semantic LLM judging for ASR beyond WER. (<https://arxiv.org/abs/2604.21928>)\n- **MathDuels: Evaluating LLMs as Problem Posers and Solvers** — models both create and solve problems in a dynamic benchmark. (<https://arxiv.org/abs/2604.21916>)\n- **From Research Question to Scientific Workflow: Leveraging Agentic AI for Science Automation** — agentic systems that convert questions into executable workflows. 
(<https://arxiv.org/abs/2604.21910>)\n\n## 📈 Trending Repos\n- **Significant-Gravitas/AutoGPT** — ⭐183k+ — autonomous-agent platform momentum remains absurdly high.\n- **langgenius/dify** — ⭐138k+ — still a top pick for production agent workflows.\n- **firecrawl/firecrawl** — ⭐111k+ — web data pipeline darling for AI agents.\n- **infiniflow/ragflow** — ⭐78k+ — RAG engine with agent capabilities continues climbing.\n\n## 🎤 Standup One-Liner\n“We hardened our agent stack for auditability and observability, so we can move faster *and* survive compliance review without dramatic keyboard sweating.”\n\n---\nRepo: <https://github.com/laugustyniak/lawful-ai-staging>",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/24"
    },
    {
      "date": "2026-04-23",
      "title": "Lawful AI Daily Brief — 2026-04-23",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "compliance"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code** kept shipping at “espresso x2” speed: MCP tools can now be called directly from hooks (`type: \"mcp_tool\"`), `/usage` now combines cost+stats, and `/resume` got smarter.\n- **Codex CLI v0.123.0** added Amazon Bedrock provider support, better `/mcp verbose` diagnosti",
      "content": "## 🛠️ Tool Updates\n- **Claude Code** kept shipping at “espresso x2” speed: MCP tools can now be called directly from hooks (`type: \"mcp_tool\"`), `/usage` now combines cost+stats, and `/resume` got smarter.\n- **Codex CLI v0.123.0** added Amazon Bedrock provider support, better `/mcp verbose` diagnostics, cleaner realtime handoffs, and fixed sticky “Working” states.\n- Community pulse: `awesome-claude-code` was mostly automation churn (repo ticker refreshes), not a big new-skills day.\n\n## 💡 Tip of the Day\nIf you’re wiring legal/compliance automation, make every subagent decision auditable by default:\n\n```json\n{\n  \"mcpServers\": {\n    \"legal-audit\": { \"command\": \"node\", \"args\": [\"audit-mcp.js\"] }\n  },\n  \"hooks\": [\n    {\n      \"event\": \"SubagentStop\",\n      \"tool\": { \"type\": \"mcp_tool\", \"name\": \"legal-audit.record\" }\n    }\n  ]\n}\n```\n\n## ⚖️ Legal x AI Watch\n- Fresh regulation-adjacent repos updated in the last 24h:\n  - [`Hasanjaafar/ai-academic-regulations-chatbot`](https://github.com/Hasanjaafar/ai-academic-regulations-chatbot) — RAG assistant over university regulations.\n  - [`dislovelhl/acgs-lite`](https://github.com/dislovelhl/acgs-lite) — governance/safety layer with EU AI Act framing.\n  - [`Jozithe3019/ai-act-guardian`](https://github.com/Jozithe3019/ai-act-guardian) — EU AI Act-oriented compliance auditing for Python projects.\n- Compliance nudge: keep prompt/response hashes + tool-call logs, so audits are boring (in the best possible way).\n\n## 📚 Fresh Papers\n- **Exploiting LLM-as-a-Judge Disposition on Free Text Legal QA via Prompt Optimization**  \n  http://arxiv.org/abs/2604.20726v1\n- **CHASM: Unveiling Covert Advertisements on Chinese Social Media**  \n  http://arxiv.org/abs/2604.20511v1\n- **Coverage, Not Averages: Semantic Stratification for Trustworthy Retrieval Evaluation**  \n  (new RAG eval framing from latest digest)\n\n## 📈 Trending Repos\n**Top new repos (latest 
snapshot):**\n- `Russell-cell/PPT-Design-Prompt` ⭐58\n- `SparkEngineAI/QuantClaw-plugin` ⭐49\n- `Yu9191/sub-store-workers` ⭐33\n\n**AI/LLM giants still sprinting:**\n- `Significant-Gravitas/AutoGPT` ⭐183k+\n- `langgenius/dify` ⭐138k+\n- `firecrawl/firecrawl` ⭐111k+\n\n## 🎤 Standup One-Liner\n“Yesterday we upgraded our agent stack for faster execution and stronger auditability—so we can ship quickly *and* sleep well during compliance reviews.”\n\n---\nRepo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/23"
    },
    {
      "date": "2026-04-22",
      "title": "Lawful AI Daily Brief — 2026-04-22",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "legal-ai"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code** kept the safety gym streak alive: recent updates tightened permission handling, improved native launch paths, and added stronger network deny controls.\n- **Codex CLI** keeps shipping fast in alpha, with better MCP wiring, context-window visibility, and sturdier pl",
      "content": "## 🛠️ Tool Updates\n- **Claude Code** kept the safety gym streak alive: recent updates tightened permission handling, improved native launch paths, and added stronger network deny controls.\n- **Codex CLI** keeps shipping fast in alpha, with better MCP wiring, context-window visibility, and sturdier plugin lifecycle commands.\n- **Ecosystem signal:** mostly maintenance churn in community curation repos (quiet but healthy).\n\n## 💡 Tip of the Day\nWhen in doubt, make your AI stack auditable *by default*:\n\n```toml\n# policy-first defaults\n[sandbox.network]\ndeniedDomains = [\"*.pastebin.com\", \"*.anonfiles.com\"]\n\n[audit]\nlogToolInvocations = true\nlogInputHashes = true\n```\n\n## ⚖️ Legal x AI Watch\n- Compliance-oriented repos updated in the last day include:\n  - [SDL-HQ/sir-firewall](https://github.com/SDL-HQ/sir-firewall) — deterministic governance gate + offline-verifiable audits\n  - [jakejjoyner/ailedger](https://github.com/jakejjoyner/ailedger) — inference logging aimed at EU AI Act traceability\n  - [unterdacker/venshield](https://github.com/unterdacker/venshield) — vendor-risk platform with HITL + audit logging\n  - [lexbeam-software/eu-ai-governance-plugin](https://github.com/lexbeam-software/eu-ai-governance-plugin) — EU AI governance plugin tooling\n- **Practical angle:** teams are increasingly shipping “policy + proof” together (controls + logs), not as separate projects.\n\n## 📚 Fresh Papers\n- **GDPR Auto-Formalization with AI Agents and Human Verification** — proposes automatic formalization of GDPR provisions with human-in-the-loop verification.  \n  https://arxiv.org/abs/2604.14607v1\n- **BenGER: A Collaborative Web Platform for End-to-End Benchmarking of German Legal Tasks** — legal LLM benchmarking across task design, annotation, runs, and metrics.  
\n  https://arxiv.org/abs/2604.13583v1\n- **From Anchors to Supervision: Memory-Graph Guided Corpus-Free Unlearning for LLMs** — unlearning method for removing sensitive/copyrighted memorized content.  \n  https://arxiv.org/abs/2604.13777v1\n\n## 🔥 Trending Repos\n- **Significant-Gravitas/AutoGPT** (Python) — still the heavyweight in autonomous-agent tooling.\n- **langchain-ai/langchain** (Python) — agent engineering platform remains highly active.\n- **NousResearch/hermes-agent** (Python) — fast-moving personal/extended agent framework.\n- **bytedance/deer-flow** (Python) — long-horizon super-agent orchestration harness.\n- **mem0ai/mem0** (Python) — memory layer remains hot in practical agent stacks.\n\n## 🎤 Standup One-Liner\n“We hardened the agent stack for auditability, tracked fresh legal-AI research, and kept one foot on shipping speed and the other on compliance receipts.”\n\n---\nRepo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/22"
    },
    {
      "date": "2026-04-19",
      "title": "Lawful AI Daily Brief — 2026-04-19",
      "tags": [
        "daily-brief",
        "lawful-ai",
        "ai-engineering",
        "legal-ai"
      ],
      "summary": "🛠️ Tool Updates\n\n- **Claude Code (v2.1.113 / v2.1.114)** shipped stability + security polish: permission-dialog crash fix, safer Bash matching, better `/loop` wakeup control, and an MCP concurrent-call timeout fix.\n- **Codex CLI alpha train (`0.122.0-alpha.8/.9/.10`)** pushed practical upgrades: b",
      "content": "## 🛠️ Tool Updates\n\n- **Claude Code (v2.1.113 / v2.1.114)** shipped stability + security polish: permission-dialog crash fix, safer Bash matching, better `/loop` wakeup control, and an MCP concurrent-call timeout fix.\n- **Codex CLI alpha train (`0.122.0-alpha.8/.9/.10`)** pushed practical upgrades: better MCP stdio wiring, plugin-cache crash hardening, high-detail image defaults, and model metadata with max context windows.\n- **Ecosystem pulse:** `awesome-claude-code` activity looked mostly like data refresh churn (quiet on brand-new tools).\n\n## 💡 Tip of the Day\n\nIf your agent stack touches sensitive data, treat network egress policy like a seatbelt, not a “nice to have.”\n\n```toml\n[sandbox.network]\ndeniedDomains=[\"*.pastebin.com\",\"*.temp-mail.org\",\"*.anonfiles.com\"]\n```\n\nBonus move: include a quick “who called what” tool log in CI artifacts so compliance evidence is not a panic project later.\n\n## ⚖️ Legal x AI Watch\n\n- Compliance-flavored repos with fresh activity in the last day:\n  - [`SDL-HQ/sir-firewall`](https://github.com/SDL-HQ/sir-firewall) — deterministic pre-inference governance gate + signed audit trail framing.\n  - [`atharvajoshi01/finreg-ml`](https://github.com/atharvajoshi01/finreg-ml) — regulation-aware finance ML pipeline with explainability/fairness hooks.\n  - [`ZOLAtheCodeX/aigovops`](https://github.com/ZOLAtheCodeX/aigovops) — operations catalog for governance frameworks (EU AI Act, ISO/IEC 42001, NIST AI RMF).\n  - [`sanna-ai/sanna`](https://github.com/sanna-ai/sanna) — trust/compliance infrastructure for agents with cryptographic receipts.\n\n## 📚 Fresh Papers\n\n- **GDPR Auto-Formalization with AI Agents and Human Verification** — proposes automated GDPR formalization using LLMs + human verification loop.  
\n  https://arxiv.org/abs/2604.14607v1\n- **BenGER: A Collaborative Web Platform for End-to-End Benchmarking of German Legal Tasks** — legal-task benchmarking platform for LLM reasoning evaluation.  \n  https://arxiv.org/abs/2604.13583v1\n- **From Anchors to Supervision: Memory-Graph Guided Corpus-Free Unlearning for Large Language Models** — corpus-free unlearning method relevant to privacy/copyright-sensitive memorization.  \n  https://arxiv.org/abs/2604.13777v1\n\n## 🔥 Trending Repos\n\n1. [`Significant-Gravitas/AutoGPT`](https://github.com/Significant-Gravitas/AutoGPT)\n2. [`f/prompts.chat`](https://github.com/f/prompts.chat)\n3. [`langgenius/dify`](https://github.com/langgenius/dify)\n4. [`langchain-ai/langchain`](https://github.com/langchain-ai/langchain)\n5. [`firecrawl/firecrawl`](https://github.com/firecrawl/firecrawl)\n\n## 🧍 Standup One-Liner\n\nToday’s vibe: **agent tooling got sturdier, legal-tech repos stayed busy, and compliance moved one inch closer to “boring by design.”**\n\n---\nSource & archive: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/19"
    },
    {
      "date": "2026-04-18",
      "title": "Lawful AI Daily Brief — 2026-04-18",
      "tags": [
        "daily-brief",
        "lawful-ai",
        "ai-engineering",
        "github"
      ],
      "summary": "🛠️ Tool Updates\n\n- **Claude Code v2.1.114** fixed a permission-dialog crash in agent-teams flows.\n- **Claude Code v2.1.113** pushed a chunky hardening pack:\n  - native per-platform binary launch\n  - `sandbox.network.deniedDomains` for explicit network blocking\n  - stricter Bash policy matching (`s",
      "content": "## 🛠️ Tool Updates\n\n- **Claude Code v2.1.114** fixed a permission-dialog crash in agent-teams flows.\n- **Claude Code v2.1.113** pushed a chunky hardening pack:\n  - native per-platform binary launch\n  - `sandbox.network.deniedDomains` for explicit network blocking\n  - stricter Bash policy matching (`sudo` wrappers, safer `find -exec/-delete`, stricter risky `rm` paths on macOS)\n  - MCP concurrent-call timeout fix and smoother `/loop` + `/ultrareview` behavior\n- **Codex** tightened trust boundaries in `.codex` for hooks/exec policies and improved plugin controls + skill-context budgeting.\n\n## 💡 Tip of the Day\n\nTreat trust as code, not vibes.\n\n```toml\n# .codex/config.toml\n[project]\ntrusted = false\n\n[security]\nrequireTrustForHooks = true\nrequireTrustForExecPolicy = true\n```\n\nWhy this slaps: it blocks “surprise automation” until the project is explicitly trusted. Safer defaults, calmer mornings ☕\n\n## 🔥 Trending Repos\n\n**Top new repos (recent):**\n- `zats/permiso` (Swift) — permission dialog UX for accessibility settings\n- `Fleet-Management-System` (Java) — newly active project\n- `SKILL-LINK-AI` (TypeScript)\n- `OHLC-Vol` (Python) — volatility forecasting from OHLC estimators\n\n**AI/LLM movers:**\n- `Significant-Gravitas/AutoGPT`\n- `langgenius/dify`\n- `open-webui/open-webui`\n- `firecrawl/firecrawl`\n- `infiniflow/ragflow`\n- `bytedance/deer-flow`\n- `mem0ai/mem0`\n\n## 🎤 Standup One-Liner\n\n\"We hardened agent tooling defaults, tightened trust boundaries, and kept our stack fast *and* less spooky in production.\" 👻✅\n\n---\nRepo: [lawful-ai-staging](https://github.com/laugustyniak/lawful-ai-staging)",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/18"
    },
    {
      "date": "2026-04-17",
      "title": "Lawful AI Daily Brief — 2026-04-17",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "legal-ai"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code** rolled out goodies for power users: `/less-permission-prompts`, `/ultrareview`, and the new **xhigh** effort tier. Translation: fewer papercuts, more shipping.\n- **Codex CLI** keeps evolving from “just a CLI” toward agent platform plumbing (marketplace add-ons, be",
      "content": "## 🛠️ Tool Updates\n- **Claude Code** rolled out goodies for power users: `/less-permission-prompts`, `/ultrareview`, and the new **xhigh** effort tier. Translation: fewer papercuts, more shipping.\n- **Codex CLI** keeps evolving from “just a CLI” toward agent platform plumbing (marketplace add-ons, better MCP namespacing, memory controls, sandbox hardening).\n- **Signal check:** the awesome-claude-code ecosystem looked maintenance-heavy in the latest commit window (more refresh/ops than net-new capabilities).\n\n## 💡 Tip of the Day\nIf your legal-AI workflow touches MCP tools, log provenance *every single time* — you’ll thank yourself at audit o’clock.\n\n```json\n{\n  \"tool\": \"legal-rag\",\n  \"input_hash\": \"sha256:...\",\n  \"source_doc_ids\": [\"doc_12\", \"doc_89\"],\n  \"timestamp\": \"2026-04-17T05:10:00Z\",\n  \"operator\": \"automation\"\n}\n```\n\n\n## ⚖️ Legal x AI Watch\n- Freshly updated repos in the EU-AI-regulation orbit today include:\n  - `csaikia23/cap-srp` (AI provenance/accountability with EU AI Act tagging)\n  - `bluethestyle/aws_ple_for_financial` (financial AI + compliance-oriented framing)\n  - `JLBird/ramon-loya-RTK-1` (LLM red teaming + compliance evidence)\n- Practical takeaway: compliance tooling is converging around **traceability + testability**. “Model did a thing” is no longer enough — show **how** and **why**.\n\n## 📚 Fresh Papers\n- **The Missing Knowledge Layer in AI: A Framework for Stable Human-AI Reasoning**  \n  Framework for making human-AI reasoning more stable in high-stakes contexts (including law).  \n  https://arxiv.org/abs/2604.14881v1\n\n- **Generalization in LLM Problem Solving: The Case of the Shortest Path**  \n  Examines whether LLMs truly generalize on algorithmic tasks vs pattern-match.  
\n  https://arxiv.org/abs/2604.15306v1\n\n- **Diagnosing LLM Judge Reliability: Conformal Prediction Sets and Transitivity Violations**  \n  Reliability diagnostics for LLM-as-judge setups using uncertainty + consistency checks.  \n  https://arxiv.org/abs/2604.15302v1\n\n## 🗣️ Standup One-Liner\n\"Today’s vibe: fewer prompt acrobatics, more auditable agent ops — compliance with receipts.\" ✅\n\n---\nRepo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/17"
    },
    {
      "date": "2026-04-16",
      "title": "Lawful AI Daily Brief — 2026-04-16",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "legal-tech"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code v2.1.110** dropped a proper operator starter pack: `/tui fullscreen`, `/focus`, mobile push notifications, cleaner `/plugin` triage, and scheduled tasks that wake back up on `--resume`.\n- Hooks/MCP reliability got a polish pass: fewer permission-hook weirdos, fewer",
      "content": "## 🛠️ Tool Updates\n- **Claude Code v2.1.110** dropped a proper operator starter pack: `/tui fullscreen`, `/focus`, mobile push notifications, cleaner `/plugin` triage, and scheduled tasks that wake back up on `--resume`.\n- Hooks/MCP reliability got a polish pass: fewer permission-hook weirdos, fewer disconnect hangs, less “why is this frozen?” energy.\n- **Codex rust-v0.121.0** added `codex marketplace add` (GitHub/local/URL), tighter MCP namespacing, improved `Ctrl+R` history in TUI, plus stronger sandbox/devcontainer hardening.\n\n## 💡 Tip of the Day\nIf your legal/research prompts keep losing context, force a longer prompt-cache TTL and regain your sanity:\n\n```bash\n# keep context warm longer for heavy legal workflows\nexport ENABLE_PROMPT_CACHING_1H=1\n\nclaude\n/tui fullscreen\n/focus\n```\n\nBonus move for MCP-based compliance stacks:\n\n```yaml\nservers:\n  legal-regs:\n    command: uvx\n    args: [eurlex-mcp]\n    metadata:\n      jurisdiction: EU\n      retention_days: 30\n      lawful_basis: \"Art.6(1)(f)\"\n```\n\n## ⚖️ Legal x AI Watch\n- GitHub activity around **EU AI Act / AI regulation** topics stayed lively, with newly pushed repos focused on:\n  - open governance knowledge packs,\n  - compliance guardrails for agent systems,\n  - enterprise governance toolkits mapping to standards like **ISO/IEC 42001** and AI risk frameworks.\n- Practical takeaway: teams are investing in **compliance-by-design scaffolding** (controls, documentation, governance templates) rather than one-off policy docs.\n\n## 📚 Fresh Papers\n- **From Anchors to Supervision: Memory-Graph Guided Corpus-Free Unlearning for LLMs** — proposes corpus-free unlearning for removing memorized sensitive/copyrighted content (very compliance-relevant).  \n  http://arxiv.org/abs/2604.13777v1\n- **RPS: Information Elicitation with Reinforcement Prompt Selection** — reinforcement-based prompting to elicit latent user info in dialogue workflows.  
\n  http://arxiv.org/abs/2604.13817v1\n- **From P(y|x) to P(y): RL in Pre-train Space** — explores pushing optimization into pre-train space to expand reasoning performance.  \n  http://arxiv.org/abs/2604.14142v1\n- **From Feelings to Metrics: Formalizing Vibe-Testing** — maps subjective “vibe checks” to measurable LLM evaluation signals.  \n  http://arxiv.org/abs/2604.14137v1\n\n## 🔥 Trending Repos\n- **thegdsks/awesome-modern-cli** (⭐ 31) — curated modern CLI alternatives.\n- **Mondo-Robotics/DiT4DiT** (⭐ 14) — vision-action model framework.\n- **zhengnaichuan2022/PAS-Net** (⭐ 13) — physics-aware spiking NN for HAR.\n- **ChaosRealmsAI/NextFrame** (⭐ 11) — Rust AI video timeline engine.\n\nAI/LLM heavyweight pulse (still dominating): AutoGPT, dify, langchain, firecrawl, ragflow, mem0.\n\n## 🧍 Standup One-Liner\nWe upgraded the AI engineering stack for fewer ops headaches, spotted fresh compliance-first repo momentum, and flagged new unlearning research that actually matters for legal risk.\n\n---\n🔗 Repo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/16"
    },
    {
      "date": "2026-04-15",
      "title": "Lawful AI Daily Brief — 2026-04-15",
      "tags": [
        "lawful-ai",
        "ai-engineering",
        "legal-tech",
        "papers",
        "github-trending"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code v2.1.108** dropped quality-of-life bangers: `/recap`, better `/resume`, cleaner rate-limit diagnostics, and sharper model-switch warnings. Less “is this stuck?” panic, more “chef is cooking.” 🍳\n- **Claude v2.1.109/107** improved progress signaling during longer run",
      "content": "## 🛠️ Tool Updates\n- **Claude Code v2.1.108** dropped quality-of-life bangers: `/recap`, better `/resume`, cleaner rate-limit diagnostics, and sharper model-switch warnings. Less “is this stuck?” panic, more “chef is cooking.” 🍳\n- **Claude v2.1.109/107** improved progress signaling during longer runs — tiny UX tweak, huge cortisol savings.\n- **Codex CLI 0.121.0-alpha.* ** kept shipping fast (multiple alpha bumps in ~24h). Translation: pin versions in CI unless your team enjoys surprise plot twists before standup.\n- Community signal from `awesome-claude-code`: desktop/local-first agent workflows keep getting attention.\n\n## 💡 Tip of the Day\nIf your workflow touches sensitive legal/compliance context, split cache policy by risk level instead of one-size-fits-all.\n\n```bash\n#!/usr/bin/env bash\n# risk-aware launcher\nexport ENABLE_PROMPT_CACHING_1H=1\n[ \"$MATTER\" = \"sensitive\" ] && export FORCE_PROMPT_CACHING_5M=1\nclaude /recap\nclaude /security-review\n```\n\nWhy this slaps: faster iteration on normal work, tighter retention posture on spicy matters. ⚖️\n\n## ⚖️ Legal x AI Watch\n- Fresh legal/compliance-flavored repos updated recently:\n  - `csaikia23/cap-srp` — cryptographic proof patterns for AI safety/accountability.\n  - `CSOAI-ORG/watermarking-authenticity-mcp` — references EU AI Act Article 50 watermarking/compliance.\n  - `Alvoradozerouno/GENESIS-v10.1` — EU banking compliance framing with AI Act angle.\n- Practical compliance nudge: if you enable long-lived prompt/session caching, treat cached artifacts as governed processing data (define TTL + retention and document lawful basis per workflow).\n\n## 🧪 Fresh Papers\n- **Operationalising the Right to be Forgotten in LLMs** (Kurt, Afli) — lightweight sequential unlearning for privacy-aligned deployment.  
\n  http://arxiv.org/abs/2604.12459\n- **ContextLens: Modeling Imperfect Privacy and Safety Context for Legal Compliance** (Li, Chen, Jing) — context-aware compliance modeling for AI systems.  \n  http://arxiv.org/abs/2604.12308\n- **The Verification Tax: Fundamental Limits of AI Auditing in the Rare-Error Regime** (Wang) — why proving ultra-low failure rates is statistically expensive (and governance-relevant).  \n  http://arxiv.org/abs/2604.12951\n- **Lightning OPD** (Wu et al.) — cheaper post-training recipe for stronger reasoning models.  \n  http://arxiv.org/abs/2604.13010\n- **One Token Away from Collapse** (Potraghloo et al.) — instruction-tuned helpfulness can be surprisingly fragile under tiny perturbations.  \n  (from today’s arXiv digest)\n\n## 📈 Trending Repos\n**Top new-ish repos in snapshot:**\n- `ChatPRD/tradclaw` ⭐ 43 — AI household manager / parenting assistant angle.\n- `quinngarcia41/Identity-Lab-Spoofer` ⭐ 27\n- `cshitian/antigravity_chinese` ⭐ 11\n\n**Still dominating AI/LLM momentum:**\n- `Significant-Gravitas/AutoGPT` ⭐ 183k+\n- `f/prompts.chat` ⭐ 159k+\n- `langgenius/dify` ⭐ 137k+\n- `langchain-ai/langchain` ⭐ 133k+\n- `open-webui/open-webui` ⭐ 131k+\n\n## 🎤 Standup One-Liner\n“Today we tightened our legal-AI posture: faster agent workflows, risk-aware cache controls, and fresh signals from privacy-unlearning + auditability research — compliance with less drag.”\n\n---\nRepo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/15"
    },
    {
      "date": "2026-04-14",
      "title": "Lawful AI Daily Brief — 2026-04-14",
      "tags": [
        "daily-brief",
        "lawful-ai",
        "ai-engineering",
        "compliance"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code v2.1.105** dropped with some real quality-of-life muscle: explicit worktree entry, stronger pre-compact blocking, better `/doctor`, and cleaner web fetch noise filtering. Translation: less \"why is this stuck?\", more \"already merged\" ⚡\n- **Codex CLI alpha (`0.121.0-a",
      "content": "## 🛠️ Tool Updates\n- **Claude Code v2.1.105** dropped with some real quality-of-life muscle: explicit worktree entry, stronger pre-compact blocking, better `/doctor`, and cleaner web fetch noise filtering. Translation: less \"why is this stuck?\", more \"already merged\" ⚡\n- **Codex CLI alpha (`0.121.0-alpha.*`)** is iterating fast, while recent stable changes keep pushing visibility: better background progress streaming, clearer hook activity, and safer typed tool outputs.\n- **Community pulse:** `awesome-claude-code` activity looks like maintenance/curation mode this cycle. Fewer fireworks, more sharpening.\n\n## 💡 Tip of the Day\nIf your agent touches anything regulated, make audit evidence automatic (not optional):\n\n```python\nimport hashlib, json, time\n\ndef audit_event(actor, action, payload, result):\n    record = {\n        \"ts\": int(time.time()),\n        \"actor\": actor,\n        \"action\": action,\n        \"input_hash\": hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest(),\n        \"output_hash\": hashlib.sha256(json.dumps(result, sort_keys=True).encode()).hexdigest(),\n    }\n    print(json.dumps(record))  # send to append-only store\n```\n\nOne tiny pattern, huge compliance headache avoided later. 
🧾\n\n## ⚖️ Legal x AI Watch\n- **Repo momentum around EU AI Act compliance tooling** keeps building, especially around traceability and tamper-evident logs.\n- Notable updates in the last 24h:\n  - [`airblackbox/air-trust`](https://github.com/airblackbox/air-trust) — audit chains + signing + compliance-oriented safeguards.\n  - [`csaikia23/cap-srp`](https://github.com/csaikia23/cap-srp) — cryptographic proof patterns for AI safety/accountability.\n  - [`unterdacker/venshield`](https://github.com/unterdacker/venshield) — vendor-risk workflow with human-in-the-loop + audit logging.\n- Practical takeaway: **compliance products are converging on verifiability-by-default** (hashes, logs, provenance), not policy PDFs.\n\n## 📚 Fresh Papers\n- **Legal2LogicICL** — diverse few-shot strategies to improve legal case → logical formula generalization.  \n  https://arxiv.org/abs/2604.11699v1\n- **AI Integrity: A New Paradigm for Verifiable AI Governance** — governance-first framing for verifiable oversight.  \n  https://arxiv.org/abs/2604.11445\n- **A Mechanistic Analysis of Looped Reasoning Language Models** — dissects looped reasoning behavior in LMs.  \n  https://arxiv.org/abs/2604.11791v1\n- **Psychological Concept Neurons** — probes and steers personality/bias-related generation in LLMs.  \n  https://arxiv.org/abs/2604.11802v1\n- **C-ReD** — benchmark for AI-generated Chinese text detection from real-world prompts.  
\n  https://arxiv.org/abs/2604.11796v1\n\n## 🔥 Trending Repos\n- **New repos with fast early stars:**\n  - `shaom/svg-hand-drawn-skill` (JS) ⭐ 19\n  - `0xAstroAlpha/Vidtory-Seedance-2.0-Drama-Studio` (TS) ⭐ 17\n- **Still dominating AI/LLM attention:**\n  - `Significant-Gravitas/AutoGPT`\n  - `langgenius/dify`\n  - `open-webui/open-webui`\n  - `infiniflow/ragflow`\n  - `NousResearch/hermes-agent`\n\n## 🧍 Standup One-Liner\nShipped signal over noise: better agent tooling, stronger compliance primitives, and a fresh paper stream worth stealing ideas from before lunch. 🍽️\n\n---\nRepo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/14"
    },
    {
      "date": "2026-04-13",
      "title": "Lawful AI Daily Brief — 2026-04-13",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "compliance"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code:** `v2.1.104` landed with a quiet changelog and fast cadence — treat this as a **stability pulse** and pin before broad rollout.\n- **Codex CLI:** fresh alpha train (`rust-v0.121.0-alpha.1/.2`) is moving fast; great for sandbox validation, less great for surprise-pro",
      "content": "## 🛠️ Tool Updates\n- **Claude Code:** `v2.1.104` landed with a quiet changelog and fast cadence — treat this as a **stability pulse** and pin before broad rollout.\n- **Codex CLI:** fresh alpha train (`rust-v0.121.0-alpha.1/.2`) is moving fast; great for sandbox validation, less great for surprise-prod adventures.\n- **Community vibe:** `awesome-claude-code` recent commits were mostly maintenance/ticker chores — ecosystem looks in consolidation mode, not feature frenzy mode.\n\n## 💡 Tip of the Day\nIf you want EU-AI-Act-friendly traceability without building a whole governance platform at 7 a.m., start with immutable audit lines per tool run:\n\n```bash\n# append-only audit record for each agent/tool output\nprintf '%s actor=%s tool=%s out_sha=%s\\n' \\\n  \"$(date -Iseconds)\" \"$USER\" \"contract_check\" \\\n  \"$(sha256sum result.json | cut -d' ' -f1)\" \\\n  >> audit.log\n```\n\nTiny log, big compliance energy. ⚖️✨\n\n## ⚖️ Legal x AI Watch\n- Legal/compliance repos with fresh activity:\n  - `csaikia23/cap-srp` — cryptographic proof + accountability framing for AI safety filters.\n  - `eric-devismes/ai-register-eu` — EU compliance register-style database for enterprise AI systems.\n  - `airblackbox/air-trust` — tamper-evident audit chain for agent handoffs.\n  - `airblackbox/gateway` — EU AI Act compliance scanner with trust-layer positioning.\n- **Practical takeaway:** teams are clearly converging on **traceability + auditability** as the first-class compliance primitive (not optional garnish).\n\n## 🔥 Trending Repos\nTop active AI/LLM projects in the latest trending snapshot:\n- `f/prompts.chat` — 159k★ — prompt sharing at massive scale.\n- `langgenius/dify` — 137k★ — agentic workflow platform.\n- `hiyouga/LlamaFactory` — 69k★ — fine-tuning stack for many LLM/VLM families.\n- `NousResearch/hermes-agent` — 60k★ — agent framework momentum continues.\n- `bytedance/deer-flow` — 60k★ — long-horizon super-agent harness.\n\n## 🗣️ Standup 
One-Liner\n\"We’re shipping faster, but today’s real upgrade is invisible: every meaningful AI action now leaves a trail legal can actually love.\" 😎\n\n---\nRepo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/13"
    },
    {
      "date": "2026-04-12",
      "title": "Lawful AI Daily Brief — 2026-04-12",
      "tags": [
        "daily-brief",
        "lawful-ai",
        "ai-engineering",
        "compliance"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code v2.1.101** dropped quality-of-life upgrades: better focus/brief behavior, improved team onboarding flow, and cleaner handling for rate-limit/tool errors.\n- **Codex CLI 0.120.0** added background progress streaming for long-running jobs + nicer hook visibility in TUI",
      "content": "## 🛠️ Tool Updates\n- **Claude Code v2.1.101** dropped quality-of-life upgrades: better focus/brief behavior, improved team onboarding flow, and cleaner handling for rate-limit/tool errors.\n- **Codex CLI 0.120.0** added background progress streaming for long-running jobs + nicer hook visibility in TUI.\n- Practical takeaway: agent runs are getting more observable, less “is this thing alive?” and more “yep, it’s cooking.” 🍳\n\n## 💡 Tip of the Day\nIf your AI workflows touch compliance-sensitive data, log *every* tool call like it might be read by future-you during an audit (because it will).\n\n```bash\n#!/usr/bin/env bash\n# tiny audit append helper\ntool=\"$1\"; purpose=\"$2\"; data_class=\"$3\"\nprintf '%s tool=%s purpose=%s data_class=%s retention=30d\\n' \\\n  \"$(date -Iseconds)\" \"$tool\" \"$purpose\" \"$data_class\" \\\n  >> .ai-audit.log\n```\n\n## ⚖️ Legal x AI Watch\n- Freshly updated legal/compliance-flavored repos include:\n  - [`Alvoradozerouno/GENESIS-v10.1`](https://github.com/Alvoradozerouno/GENESIS-v10.1) — Sovereign AI OS for EU banking compliance (AI Act + CRR III + MiCAR)\n  - [`eric-devismes/ai-register-eu`](https://github.com/eric-devismes/ai-register-eu) — Enterprise AI systems compliance database\n  - [`unterdacker/assessly`](https://github.com/unterdacker/assessly) — Vendor risk dashboard with AI-assisted review and audit logs\n- Compliance angle for this week: if your agents can act, your logs must explain **who triggered what, when, and why**.\n\n## 📈 Trending Repos\n- `Significant-Gravitas/AutoGPT` ⭐ 183k — still the heavyweight in open agent ecosystems.\n- `langgenius/dify` ⭐ 137k — agentic workflow platform momentum remains strong.\n- `firecrawl/firecrawl` ⭐ 107k — web data for agents is still hot infrastructure.\n- `bytedance/deer-flow` ⭐ 60k — long-horizon “superagent” harness keeps climbing.\n\n## 🗣️ Standup One-Liner\n“Yesterday we improved agent observability and auditability—so now our AI stack is faster to 
operate *and* easier to defend in a compliance review.”\n\n---\n🔗 Repo: https://github.com/laugustyniak/lawful-ai-staging",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/12"
    },
    {
      "date": "2026-04-11",
      "title": "Lawful AI Daily Brief — 2026-04-11",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-engineering",
        "compliance"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code v2.1.101** dropped with a new `/team-onboarding` command, tighter hook/settings safety, MCP + subagent inheritance fixes, and a command-injection patch. Net effect: fewer “wait, why did it do that?” moments.\n- **Codex CLI 0.120.0** added Realtime V2 visibility for b",
      "content": "## 🛠️ Tool Updates\n- **Claude Code v2.1.101** dropped with a new `/team-onboarding` command, tighter hook/settings safety, MCP + subagent inheritance fixes, and a command-injection patch. Net effect: fewer “wait, why did it do that?” moments.\n- **Codex CLI 0.120.0** added Realtime V2 visibility for background progress, cleaner hook introspection, and richer MCP `outputSchema` typing.\n- Community pulse check: `awesome-claude-code` activity is mostly curation chores right now, not major new tool drops.\n\n## 💡 Tip of the Day\nIf your compliance brain likes receipts (and it should), make every tool action append to an immutable audit log:\n\n```bash\njq -c '{ts:.timestamp,user:.actor,tool:.tool,input_hash:.input_sha256}' tool-events.jsonl \\\n  >> ai_audit_log.jsonl\n```\n\nTiny command, huge upside for traceability, incident review, and EU-AI-Act-friendly documentation hygiene.\n\n## ⚖️ Legal x AI Watch\n- Fresh legal/compliance-flavored repo activity includes:\n  - [`Alvoradozerouno/GENESIS-v10.1`](https://github.com/Alvoradozerouno/GENESIS-v10.1) — sovereign AI OS framing for EU banking compliance (AI Act/CRR III/MiCAR).\n  - [`eric-devismes/ai-register-eu`](https://github.com/eric-devismes/ai-register-eu) — enterprise AI systems compliance database concept.\n  - [`csaikia23/cap-srp`](https://github.com/csaikia23/cap-srp) — cryptographic accountability and tamper-evident AI safety/provenance logs.\n- Practical takeaway: if your product can’t answer **“who triggered what, when, with which policy?”** in one query, today’s a good day to fix that.\n\n## 📚 Fresh Papers\n- The latest arXiv sweep (last 24h window used by the digest job) reported **no new matches** across Legal AI, LLMs, RAG, and AI agents.\n- Good day to read your backlog instead of pretending you’ll “totally get to those 47 open tabs later.” 😅\n\n## 🔥 Trending Repos\n- **New repo momentum:**\n  - [`AMAP-ML/SkillClaw`](https://github.com/AMAP-ML/SkillClaw) — ⭐ 75 — “Let skills evolve 
collectively with an agentic evolver.”\n- **Still dominating AI/LLM activity:**\n  - [`Significant-Gravitas/AutoGPT`](https://github.com/Significant-Gravitas/AutoGPT)\n  - [`langgenius/dify`](https://github.com/langgenius/dify)\n  - [`langchain-ai/langchain`](https://github.com/langchain-ai/langchain)\n  - [`open-webui/open-webui`](https://github.com/open-webui/open-webui)\n  - [`firecrawl/firecrawl`](https://github.com/firecrawl/firecrawl)\n\n## 🎤 Standup One-Liner\n“Yesterday we improved agent observability and compliance traceability — today we ship faster *and* sleep better.”\n\n---\n🔗 Repo: [lawful-ai-staging](https://github.com/laugustyniak/lawful-ai-staging)",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/11"
    },
    {
      "date": "2026-04-10",
      "title": "Lawful AI Daily Brief — 2026-04-10",
      "tags": [
        "lawful-ai",
        "daily-brief",
        "ai-regulation",
        "developer-tools"
      ],
      "summary": "💡 Tip of the Day\n\nUse structured output gates to keep your agent deterministic under pressure:\n\n```yaml\npolicy:\n  output_contract: markdown\n  max_sections: 6\n  skip_on_error: true\n  include_only_with_evidence: true\n```\n\nTiny config, huge chaos reduction. Your future self says thanks. 😎\n\n⚖️ Legal",
      "content": "## 💡 Tip of the Day\n\nUse structured output gates to keep your agent deterministic under pressure:\n\n```yaml\npolicy:\n  output_contract: markdown\n  max_sections: 6\n  skip_on_error: true\n  include_only_with_evidence: true\n```\n\nTiny config, huge chaos reduction. Your future self says thanks. 😎\n\n## ⚖️ Legal x AI Watch\n\n**Compliance + regulation repos (freshly active)**\n- [Alvoradozerouno/GENESIS-v10.1](https://github.com/Alvoradozerouno/GENESIS-v10.1) — ★1, pushed 2026-04-10\n- [Unawakened-landlord758/ClawGuard](https://github.com/Unawakened-landlord758/ClawGuard) — ★0, pushed 2026-04-10\n- [morgancertifiable319/china-ai-compliance](https://github.com/morgancertifiable319/china-ai-compliance) — ★0, pushed 2026-04-10\n- [luniimaru-hue/ai-governance-knowledge](https://github.com/luniimaru-hue/ai-governance-knowledge) — ★1, pushed 2026-04-10\n- [ivanmoralesf2015-sudo/enterprise-governance-toolkit](https://github.com/ivanmoralesf2015-sudo/enterprise-governance-toolkit) — ★0, pushed 2026-04-10\n\n## 🎤 Standup One-Liner\n\nWe shipped signal, skipped noise, and kept compliance in the loop without killing builder velocity. 🚀\n\n---\nSource repo: <https://github.com/laugustyniak/lawful-ai-staging>",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/10"
    },
    {
      "date": "2026-04-09",
      "title": "Lawful AI Daily Brief — 2026-04-09",
      "tags": [
        "daily-brief",
        "lawful-ai",
        "ai-engineering",
        "legal-ai",
        "compliance"
      ],
      "summary": "🛠️ Tool Updates\n\n- **Claude Code sprinted again**: `v2.1.97/96/94` landed in quick succession.\n  - Focus view (`Ctrl+O`) + less UI flicker\n  - Better `/agents` visibility\n  - Bedrock auth got patched before anyone could start yelling at logs\n- **Codex CLI** is still in caffeinated alpha mode (`0.1",
      "content": "## 🛠️ Tool Updates\n\n- **Claude Code sprinted again**: `v2.1.97/96/94` landed in quick succession.\n  - Focus view (`Ctrl+O`) + less UI flicker\n  - Better `/agents` visibility\n  - Bedrock auth got patched before anyone could start yelling at logs\n- **Codex CLI** is still in caffeinated alpha mode (`0.119.0-alpha.24/.25/.26`).\n  - Translation: pin versions in CI unless you enjoy surprise side quests.\n- **Community pulse (awesome-claude-code)**: mostly ticker/SVG chores in latest commits, not a huge wave of fresh workflow patterns.\n\n## 💡 Tip of the Day\n\n**Freeze fast-moving agent tooling so compliance artifacts stay reproducible.**\n\n```bash\n# Pin current Codex release tag into CI metadata\nCODEX_TAG=$(curl -s https://api.github.com/repos/openai/codex/releases?per_page=1 | jq -r '.[0].tag_name')\nprintf \"codex_version=%s\\n\" \"$CODEX_TAG\" >> .ai/build-metadata.env\n\ngit diff --name-only > .ai/changed_files.txt\nprintf \"%s | model=%s | task=%s\\n\" \"$(date -Iseconds)\" \"claude/codex\" \"${TASK_ID:-daily-brief}\" >> .ai/audit.log\n```\n\nMinimal effort, maximal “yes auditor, we *can* explain this output.” 😎\n\n## ⚖️ Legal x AI Watch\n\n- **Legal-AI repos updated in the last day:**\n  - `arsitekberotot/arsitrad` — Indonesian building-regulation chatbot (RAG + fine-tuned Gemma)\n  - `agentauditAI/AgentAudit` — immutable AI-agent audit logs, EU AI Act compliance angle\n  - `in-variant/invariant-website` — AI for engineering regulations\n  - `Alvoradozerouno/GENESIS-v10.1` — sovereign AI + EU banking compliance framing\n- **Compliance reality check (the non-boring version):**\n  - **EU AI Act:** keep traceability + technical docs alive from day one\n  - **Copyright/IP:** store provenance for generated snippets and training/source assumptions\n  - **GDPR:** minimize personal data in prompts, logs, and eval datasets\n\n## 📚 Fresh Papers\n\nTop picks from the latest digest:\n\n- **Strategic Persuasion with Trait-Conditioned Multi-Agent 
Systems for Iterative Legal Argumentation**  \n  Multi-agent courtroom simulation where trait diversity improved legal argument strategy.  \n  <https://arxiv.org/abs/2604.07028v1>\n\n- **Luwen Technical Report**  \n  Legal-domain LLM effort focused on legal-text complexity and specialist reasoning behavior.  \n  <https://arxiv.org/abs/2604.06737v1>\n\n- **Personalized RewardBench**  \n  Shows reward models still struggle with user-specific alignment at personalization depth.  \n  <https://arxiv.org/abs/2604.07343v1>\n\n## 📈 Trending Repos\n\nFrom the latest GitHub trending snapshot:\n\n- **AI/LLM active giants still moving:**\n  - `Significant-Gravitas/AutoGPT`\n  - `langgenius/dify`\n  - `langchain-ai/langchain`\n  - `infiniflow/ragflow`\n  - `continuedev/continue`\n- **New repo lane was thin today** (only 2 qualifying results in that query window), which usually means either a niche day… or everyone pushed straight to existing monorepos.\n\n## 🗣️ Standup One-Liner\n\n“Today’s move: lock fast-moving agent tool versions, keep an immutable audit trail, and treat compliance like a feature — not a postmortem hobby.”\n\n---\nRepo: <https://github.com/laugustyniak/lawful-ai-staging>",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/09"
    },
    {
      "date": "2026-04-08",
      "title": "Lawful AI Daily Brief — 2026-04-08",
      "tags": [
        "lawful-ai",
        "eu-ai-act",
        "compliance",
        "llm",
        "daily-brief"
      ],
      "summary": "🛠️ Tool Updates\n- **Claude Code:** v2.1.96 shipped a Bedrock auth hotfix after v2.1.94’s feature blast (Mantle mode, better `/effort`, sharper MCP/Slack behavior). Translation: fast shipping + fast bug squash.\n- **Codex CLI:** Alpha train keeps rolling (`0.119.0-alpha.*`) with infra hardening vibe",
      "content": "## 🛠️ Tool Updates\n- **Claude Code:** v2.1.96 shipped a Bedrock auth hotfix after v2.1.94’s feature blast (Mantle mode, better `/effort`, sharper MCP/Slack behavior). Translation: fast shipping + fast bug squash.\n- **Codex CLI:** Alpha train keeps rolling (`0.119.0-alpha.*`) with infra hardening vibes: sandbox polish, artifact/signing maturity, and runtime reliability work.\n- **Community pulse:** `awesome-claude-code` latest window = mostly repo ticker/visual churn. Low signal for brand-new skills this cycle.\n\n## 💡 Tip of the Day\nIf your legal-AI workflows touch regulated outputs, bake auditability in *before* shipping features:\n\n```bash\n# Tiny but mighty: keep an immutable trail for compliance evidence\nrun_id=$(date +%s)\nprintf \"%s\\t%s\\t%s\\t%s\\n\" \\\n  \"$run_id\" \"$(git rev-parse --short HEAD)\" \"gpt-5.3-codex\" \"prompt_sha:$(echo -n \"$PROMPT\" | sha256sum | cut -d' ' -f1)\" \\\n  >> audit/agent-runs.tsv\n```\n\nFuture-you (and compliance-you) will send present-you flowers. 🌷\n\n## ⚖️ Legal x AI Watch\n- **Copyright/IP angle:** Still high-priority to log model/version + prompt hash + output/code diff lineage. 
This is the practical path to provenance when ownership questions appear.\n- **Compliance signal from repos updated in last 24h:**\n  - `Unawakened-landlord758/ClawGuard` — OpenClaw safety/guardrails, explicit EU AI Act framing.\n  - `luniimaru-hue/ai-governance-knowledge` — governance artifacts + decision logging resources.\n  - `Alvoradozerouno/GENESIS-v10.1` — sovereign banking compliance stack (AI Act + CRR III + MiCAR references).\n  - `arsitekberotot/arsitrad` — regulation-focused RAG chatbot pattern.\n\n## 📚 Fresh Papers\nHighlights from the latest paper digest window:\n- **Paper Circle** (`arXiv:2604.06170`) — multi-agent framework for research discovery/synthesis.\n- **In-Place Test-Time Training** (`arXiv:2604.06169`) — dynamic LLM adaptation at inference time.\n- **Retriever bias in RAG** (`arXiv:2604.06163`, `2604.06097`) — bias toward LLM text and query-rewrite effects.\n\nLegal-specific papers were quiet in this latest 24h cut, but the retrieval-bias thread is directly relevant for legal QA reliability.\n\n## 🔥 Trending Repos\nFrom the latest trending snapshot + legal-AI search cross-check:\n- **Significant-Gravitas/AutoGPT**\n- **langgenius/dify**\n- **langchain-ai/langchain**\n- **firecrawl/firecrawl**\n- **Unawakened-landlord758/ClawGuard** (legal/compliance-adjacent signal)\n\nShort take: general agent infra is still sprinting; governance-aware wrappers are now visibly piggybacking that momentum.\n\n## 🎤 Standup One-Liner\n“Core agent tooling got sturdier overnight, and governance-focused repos are heating up—today’s move is to treat audit trails as a feature, not paperwork.”\n\n---\nSource repo: [lawful-ai-staging](https://github.com/laugustyniak/lawful-ai-staging)",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/08"
    },
    {
      "date": "2026-04-07",
      "title": "AI Intelligence Brief — April 7, 2026",
      "tags": [
        "claude-code",
        "codex",
        "legal-ai",
        "arxiv",
        "trending"
      ],
      "summary": "🧠 AI Intelligence Brief — April 7, 2026\n\n*Where law meets code meets caffeine ☕*\n\n🔧 Tool Updates\n\nClaude Code (v2.1.92)\n- 🆕 Interactive Bedrock setup wizard for AWS authentication\n- 💰 Per-model and cache-hit breakdown added to `/cost`\n- ⚡ 60% faster large-file write diffs — your monorepo rejoic",
      "content": "# 🧠 AI Intelligence Brief — April 7, 2026\n\n*Where law meets code meets caffeine ☕*\n\n## 🔧 Tool Updates\n\n### Claude Code (v2.1.92)\n- 🆕 Interactive Bedrock setup wizard for AWS authentication\n- 💰 Per-model and cache-hit breakdown added to `/cost`\n- ⚡ 60% faster large-file write diffs — your monorepo rejoices\n- 🔧 Fixed subagent spawning after tmux windows are killed\n\n### OpenAI Codex (0.119.0-alpha.12)\n- 📦 New `codex-responses-api-proxy` and `codex-command-runner` artifacts\n- 🪟 Windows sandbox setup tooling — Codex finally acknowledges Windows exists\n- 🔐 Signed multi-platform builds with `.sigstore` attestation\n\n## 💡 Tip of the Day\n\n> **Fail-closed managed settings** — one line of config, zero policy drift. Your compliance team will love you. Your inner rebel won't.\n\n```json\n{ \"policies\": { \"forceRemoteSettingsRefresh\": true } }\n```\n\n## ⚖️ Legal × AI Watch\n\n*The intersection most engineers ignore and most lawyers don't understand. That's your edge.*\n\n- 🇪🇺 **EU AI Act enforcement update** — High-risk AI systems now require documented testing logs. If your CI doesn't produce audit trails, you're already behind.\n- 📜 **Copyright & AI-generated code** — New USPTO guidance suggests AI-assisted code may qualify for copyright if human selection and arrangement is documented. TL;DR: your PR review history is your IP proof.\n- 🔍 **Compliance tip:** Add `--log-level info` to your Claude Code sessions when working on regulated systems. The session log becomes your audit trail.\n\n## 📚 Fresh Papers\n\n- 🏛️ **[DeonticBench: A Benchmark for Reasoning over Rules](https://arxiv.org/abs/2604.04443)** — Dou et al. 6,232 tasks on obligations, permissions, and prohibitions. Best frontier LLM hits only 44.4% — turns out reading the law is hard even for GPT-5. Prolog-backed reasoning included.\n- 🤖 **[Adaptive Cost-Efficient Evaluation for Patent Claim Validation](https://arxiv.org/abs/2604.04295)** — Yoo et al. 
Hybrid LLM framework achieves 94.95% F1 while cutting costs 78%. Your patent team will want this yesterday.\n\n## 🔥 Trending Repos\n\n- ⭐ **[hesreallyhim/awesome-claude-code](https://github.com/hesreallyhim/awesome-claude-code)** (36.9k★) — The definitive skills/hooks/MCP collection\n- ⭐ **[ruvnet/ruflo](https://github.com/ruvnet/ruflo)** (30.3k★) — Agent orchestration platform for Claude\n\n## 🎙️ Standup One-Liner\n\n*Drop this in your next meeting and sound unreasonably well-informed:*\n\n> \"The EU AI Act's logging requirements basically mean our CI pipeline is now a legal document — I've been looking at what we'd need to change.\"\n\n---\n\n*Generated by [Lawful AI](https://github.com/laugustyniak/lawful-ai) 🦞 — daily AI engineering intelligence with a legal edge.*\n*Curated by [@laugustyniak](https://github.com/laugustyniak) — because someone has to read the regulations so you don't have to.*",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/07"
    },
    {
      "date": "2026-04-06",
      "title": "AI Intelligence Brief — April 6, 2026",
      "tags": [
        "claude-code",
        "codex",
        "legal-ai",
        "arxiv",
        "trending"
      ],
      "summary": "🧠 AI Intelligence Brief — April 6, 2026\n\n*Where law meets code meets caffeine ☕*\n\n🔧 Tool Updates\n\nClaude Code\n\n📭 No release today. The week gave us v2.1.89 through v2.1.92 — four releases in six days. If you're keeping score: interactive lessons, MCP persistence at 500K, executable plugins, fail",
      "content": "# 🧠 AI Intelligence Brief — April 6, 2026\n\n*Where law meets code meets caffeine ☕*\n\n## 🔧 Tool Updates\n\n### Claude Code\n\n📭 No release today. The week gave us v2.1.89 through v2.1.92 — four releases in six days. If you're keeping score: interactive lessons, MCP persistence at 500K, executable plugins, fail-closed enterprise policies, Bedrock wizards, per-model cost tracking, and 60% faster diffs. Not bad for a week.\n\n### Codex\n\n📭 No new Codex builds today, but the two Rust alpha drops from Friday (alpha.9, alpha.11) are worth watching. The rewrite is clearly in the \"ship fast, iterate faster\" phase.\n\n**Week in review:** Claude Code shipped 4 versions. Codex shipped 2 alpha builds. The velocity gap tells you where the maturity curve is — but Codex's Rust foundation could close it fast once stable lands.\n\n## 💡 Tip of the Day\n\nSunday recap — the three most impactful things from this week that you should actually configure before Monday:\n\n```bash\n# 1. Enable fail-closed remote settings (v2.1.92)\n# In your org policy config:\n# \"forceRemoteSettingsRefresh\": true\n\n# 2. Check your per-model cost attribution\nclaude /cost\n# If Opus is dominating, route classification/triage\n# tasks to Haiku\n\n# 3. Try the interactive release notes picker\nclaude /release-notes\n# See exactly what changed between your current\n# version and any other\n```\n\nBonus: if you're a plugin author, v2.1.91's executable shipping support means you can bundle Rust/Go/C binaries with your plugin. 
The performance ceiling for Claude Code plugins just disappeared.\n\n## ⚖️ Legal × AI Watch\n\n### Open Source AI and the EU AI Act — What's Exempt, What's Not\n\nThe EU AI Act has a carve-out for open-source AI — but it's narrower than most people think, and the details matter enormously.\n\n**What the Act says:**\n\n- **Free and open-source AI models** are generally exempt from most GPAI obligations (documentation, transparency, copyright compliance) — *unless* they pose systemic risk.\n- **Systemic risk threshold:** If your open-source model has more than 10^25 FLOPs of training compute (or the Commission designates it), the exemption evaporates. You get the full GPAI obligations regardless of license.\n- **The deployer isn't exempt.** Even if the model itself benefits from the open-source exemption, whoever *deploys* it in a high-risk use case still bears all the deployer obligations under the Act.\n\n**The practical implications:**\n\n1. **Small open-source models** (most of them) — largely exempt from provider obligations. Release your 7B parameter legal reasoning model without writing a 200-page technical doc.\n2. **Frontier open-source models** (Llama-scale and above) — probably above the compute threshold. Open-source license doesn't save you from transparency obligations.\n3. **Everyone who deploys open-source models in production** — you're not exempt. You still need risk assessments, human oversight, and compliance documentation for high-risk deployments.\n\n**The open question:** What counts as \"free and open source\"? The Act references the existing open-source definitions, but the AI community's debate over \"open weights vs. truly open source\" (training data, training code, evaluation harnesses) remains unresolved. 
Meta's Llama license, for instance, has commercial restrictions that might disqualify it from the exemption entirely.\n\n**Bottom line:** Open source is a development methodology advantage under the Act, not a compliance get-out-of-jail-free card. If you're deploying open-source AI in regulated contexts, you still need the same rigor as proprietary deployments.\n\n## 📚 Fresh Papers\n\n- 📄 [**Bridging National and International Legal Data: Two Projects Based on the Japanese Legal Standard XML Schema**](https://arxiv.org/abs/2603.15094) — Nakamura et al. Computational comparative law using standardized legal XML. Infrastructure work that makes cross-jurisdictional legal AI possible.\n\n- 📄 [**CVPD at QIAS 2026: RAG-Guided LLM Reasoning for Islamic Inheritance Share Computation**](https://arxiv.org/abs/2603.24012) — Swaileh et al. Multi-stage legal reasoning for inheritance law using RAG. A fascinating intersection of religious law, formal logic, and LLMs.\n\n- 📄 [**Xpertbench: Expert Level Tasks with Rubrics-Based Evaluation**](https://arxiv.org/abs/2604.02368) — Liu et al. As LLMs plateau on conventional benchmarks, this paper pushes evaluation to expert-level tasks including legal reasoning. The bar is rising.\n\n- 📄 [**Characterizing Delusional Spirals through Human-LLM Chat Logs**](https://arxiv.org/abs/2603.16567) — Moore et al. When LLM conversations go wrong — analyzing negative psychological patterns in chat logs. Important safety research with regulatory implications.\n\n- 📄 [**Unmasking Hallucinations: A Causal Graph-Attention Perspective on Factual Reliability**](https://arxiv.org/abs/2604.04020) — Kurra et al. A causal approach to understanding *why* LLMs hallucinate, not just detecting that they do. Foundational work for trustworthy legal AI.\n\n## 🔥 Trending Repos\n\n- ⚔️ [**x1xhlol/better-clawd**](https://github.com/x1xhlol/better-clawd) — \"Claude Code, but better.\" OpenAI/OpenRouter support, no telemetry, no lock-in. 375 stars. 
The community fork energy is strong this week.\n\n- 📝 [**clawplays/ospec**](https://github.com/clawplays/ospec) — Document-driven AI development for coding assistants. 343 stars. Spec-first development meets AI-first tooling.\n\n- 📸 [**chencore/deep-live-cam-tutorial**](https://github.com/chencore/deep-live-cam-tutorial) — Deep-Live-Cam installation and usage tutorial for AI face-swapping. 80 stars. The legal implications of this one write themselves.\n\n## 🎙️ Standup One-Liner\n\n> \"Wrapped the week with 4 Claude Code releases, 2 Codex alphas, a deep dive into open-source AI regulation, and the realization that 'open source' under EU law means something very specific — and it probably doesn't include your favorite model's license.\"\n\n---\n*Generated by [Lawful AI](https://github.com/laugustyniak/lawful-ai) 🦞 — daily AI engineering intelligence with a legal edge.*\n*Curated by [@laugustyniak](https://github.com/laugustyniak) — because someone has to read the regulations so you don't have to.*",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/06"
    },
    {
      "date": "2026-04-05",
      "title": "AI Intelligence Brief — April 5, 2026",
      "tags": [
        "claude-code",
        "codex",
        "legal-ai",
        "arxiv",
        "trending"
      ],
      "summary": "🧠 AI Intelligence Brief — April 5, 2026\n\n*Where law meets code meets caffeine ☕*\n\n🔧 Tool Updates\n\nClaude Code\n\n📭 No release today. After yesterday's v2.1.92 blockbuster (fail-closed remote settings, Bedrock wizard, 60% faster diffs), a rest day is earned. Go explore the per-model cost breakdown",
      "content": "# 🧠 AI Intelligence Brief — April 5, 2026\n\n*Where law meets code meets caffeine ☕*\n\n## 🔧 Tool Updates\n\n### Claude Code\n\n📭 No release today. After yesterday's v2.1.92 blockbuster (fail-closed remote settings, Bedrock wizard, 60% faster diffs), a rest day is earned. Go explore the per-model cost breakdown if you haven't already — `/cost` is your new best friend.\n\n### Codex\n\n📭 No new Codex releases today, but yesterday's alpha.9 and alpha.11 Rust builds are worth compiling if you're tracking the rewrite. The alpha cadence suggests a stable release isn't far off.\n\n## 💡 Tip of the Day\n\nWeekend project idea: set up the `forceRemoteSettingsRefresh` policy from v2.1.92 in a staging environment before Monday rolls around.\n\n```json\n// In your organization's Claude Code policy config:\n{\n  \"forceRemoteSettingsRefresh\": true\n}\n```\n\nWhen enabled, Claude Code will **fail closed** if it can't reach the remote settings server. This means:\n- No stale configs running in production\n- No \"we didn't know the policy changed\" incidents\n- A clear signal when network issues affect your AI tooling\n\nPair this with monitoring on the settings endpoint, and you've got an enterprise-grade control plane for AI developer tools. Your CISO will buy you lunch.\n\n## ⚖️ Legal × AI Watch\n\n### AI Liability — Who's Responsible When the Agent Breaks Prod?\n\nThe AI liability question just got a lot more pressing now that AI agents can execute code, call APIs, modify databases, and — if you're bold — deploy to production.\n\n**The liability stack, as it's shaping up:**\n\n1. **The AI provider** — responsible for the model behaving as documented. If the model hallucinates a `DROP TABLE` when asked to \"clean up the data,\" that's arguably a model defect.\n2. **The tool/platform builder** — responsible for guardrails, sandboxing, and access controls. If your agent framework lets an LLM execute arbitrary SQL in production without confirmation... 
that's on you.\n3. **The deploying organization** — responsible for appropriate use, monitoring, and human oversight. \"We let the AI do it\" is not a defense when you had no review process.\n4. **The individual developer?** — this is where it gets murky. If a developer prompts an AI agent to \"fix the performance issue\" and it drops an index that takes down the service, is that developer negligence?\n\n**The EU's proposed AI Liability Directive** would create a presumption of causality: if an AI system causes harm and the provider/deployer violated their obligations under the AI Act, the burden of proof shifts. You don't have to prove the AI *caused* the harm — just that obligations were breached and harm occurred.\n\n**For engineering teams, the practical takeaway:**\n\n- Implement confirmation gates for destructive operations\n- Log *everything* — prompts, tool calls, model responses, human approvals\n- Define clear ownership boundaries in your agent architectures\n- Treat AI agent permissions like you treat IAM roles: least privilege, always\n\nThe days of \"move fast and break things\" with AI agents are numbered. Move fast, but keep receipts.\n\n## 📚 Fresh Papers\n\n- 📄 [**EvidenceRL: Reinforcing Evidence Consistency for Trustworthy Language Models**](https://arxiv.org/abs/2603.19532) — Tamo et al. Using reinforcement learning to make LLMs ground their answers in evidence. Directly relevant to legal applications where hallucination is a liability.\n\n- 📄 [**Lightweight Query Routing for Adaptive RAG**](https://arxiv.org/abs/2604.03455) — Bansal et al. Not all queries need the same retrieval strategy. This paper routes queries to different RAG pipelines based on complexity — saving tokens and improving accuracy.\n\n- 📄 [**Adaptive Chunking: Optimizing Chunking-Method Selection for RAG**](https://arxiv.org/abs/2603.25333) — de Moura Junior et al. How you chunk your documents matters more than which embedding model you use. 
This paper proves it empirically.\n\n- 📄 [**ESG-Bench: Benchmarking Long-Context ESG Reports for Hallucination Mitigation**](https://arxiv.org/abs/2603.13154) — Sun et al. Long-form corporate reporting meets hallucination testing. Essential reading if you're building AI for compliance or financial analysis.\n\n## 🔥 Trending Repos\n\n- 🎙️ [**zarazhangrui/personalized-podcast**](https://github.com/zarazhangrui/personalized-podcast) — Turn any content into a personalized AI podcast. NotebookLM-style but you control the script. 213 stars.\n\n- 🏰 [**ThinkWatchProject/ThinkWatch**](https://github.com/ThinkWatchProject/ThinkWatch) — Enterprise AI bastion host for secure API and MCP access with RBAC and audit logs. 123 stars. The enterprise security layer AI tools need.\n\n- 🧠 [**NicholasSpisak/second-brain**](https://github.com/NicholasSpisak/second-brain) — LLM-maintained personal knowledge base for Obsidian, based on Karpathy's LLM Wiki pattern. 75 stars. Your second brain, maintained by a third brain.\n\n## 🎙️ Standup One-Liner\n\n> \"No releases today so I set up fail-closed remote settings, read about who's liable when AI agents go rogue, and realized my agent's IAM policy is more permissive than my intern's was. Monday me has some work to do.\"\n\n---\n*Generated by [Lawful AI](https://github.com/laugustyniak/lawful-ai) 🦞 — daily AI engineering intelligence with a legal edge.*\n*Curated by [@laugustyniak](https://github.com/laugustyniak) — because someone has to read the regulations so you don't have to.*",
      "url": "https://laugustyniak.github.io/lawful-ai/2026/04/05"
    }
  ]
}
