AI Agents · high risk

Rabbit R1: The "Do It For Me" Device and the Price of Delegation

In January 2024, rabbit launched the r1—a $199 device built around a Large Action Model (LAM) that promises to use your apps for you. It's the shift from 'AI that answers' to 'AI that acts.' But when AI can do things as you, delegation becomes a security event. The real danger isn't hallucination—it's invisible action.

January 9, 2024 · 14 min read · AHTV Desk
#ai-agents #action-models #automation #security #accountability
DISPATCH — JANUARY 2024

TL;DR

In January 2024, rabbit launched the r1: a $199 pocket device running "rabbit OS," built around a Large Action Model (LAM) that can navigate your apps and carry out tasks on your behalf. The pitch is seductive: tell AI what you want, and it will do the steps. But when AI can act as you inside your services, you've created a new attack surface and a new accountability problem. The question shifts from "Can this work?" to "What happens when it fails?"

— 1) WHAT LAUNCHED

On January 9, 2024, rabbit announced the r1:

- Price: $199
- Pre-sales: started January 9, 2024
- Shipping: late March 2024 (U.S.), global shipping later
- Core idea: a standalone handheld device running rabbit OS, built on a Large Action Model (LAM) that learns how to use apps by demonstration and can replicate actions across interfaces
- Login hub: a web portal called "rabbithole" where users connect existing services
- Privacy posture: no always-listening mode; push-to-talk mic activation; camera physically blocked by default; claims of not storing third-party credentials

This is not an API integration strategy. This is not a traditional partnership model. This is: AI learns to use your apps the way you do, then does the work when you ask. That distinction matters.

— 2) THE REAL TREND: "APP-FREE" DOESN'T MEAN "PERMISSION-FREE"

Here's the uncomfortable truth: if AI can use your apps, then AI needs the same power you have.

rabbit didn't build official integrations with Spotify, Uber, or other services. Instead:

- The LAM is trained by humans interacting with apps, so it learns button patterns, menus, confirmations, and flows.
- You connect your services via rabbithole.
- The system can then act "as you" inside those services.

This felt exciting in January 2024 because it sounded like the end of app juggling. But here's why it's ethically intense: "AI that can do things for you" is always one step away from "AI that can do things as you." And delegation is not a feature. Delegation is a security event.

— 3) WHY THIS CATEGORY IS RISKIER THAN "CHATBOTS"

A chatbot can mislead you. An action model can mis-spend you. When the system moves from words to actions, the failure modes shift:

A) A MISTAKE BECOMES A TRANSACTION

Wrong address. Wrong item. Wrong time. Wrong recipient. Wrong permissions. With a normal app, you are doing the clicking, and that friction forces awareness. With an agent, the friction disappears and your "yes" becomes implicit. The ethical demand here is not "high accuracy." It's high verification (see the sketch at the end of this section).

B) "APP-FREE" CAN HIDE WHO'S ACCOUNTABLE

If something goes wrong, who is responsible?

- The app? (They didn't build the agent.)
- The agent company? (They don't own the service.)
- The user? (They didn't perform the steps.)
- The model? (It can't be held responsible.)

When responsibility becomes blurry, harm becomes easier.

C) THE SYSTEM NEEDS ACCESS TO THE PARTS OF YOUR DIGITAL LIFE YOU GUARD MOST

In practice, an "agent" needs:

- accounts
- logins
- session tokens
- payment pathways
- identity-linked services

rabbit's launch post frames rabbithole like handing your unlocked phone to a trusted friend to order food: with permission, without storing passwords. Even if done carefully, the category still creates a new attack surface. If agents become normal, more companies will build them badly. And "built badly" is where users get hurt.
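What does "high verification" mean in practice? Here is a minimal sketch, in TypeScript, of a gate that decides whether a proposed agent action can run silently, needs explicit user confirmation, or is blocked outright. Every name here (AgentAction, gate, the fields) is a hypothetical illustration of the pattern, not rabbit's API or any published interface.

```ts
// A minimal sketch of a "high verification" gate for agent actions.
// All types and names are hypothetical illustrations, not rabbit's API.

type AgentAction = {
  service: string;     // e.g. "rideshare", "payments"
  description: string; // human-readable summary shown to the user
  reversible: boolean; // can this action be undone after the fact?
  costUsd: number;     // 0 for free actions
};

type Decision = "execute" | "needs_confirmation" | "blocked";

// Friction is applied wherever money or irreversibility is involved,
// instead of trusting the model's accuracy alone.
function gate(action: AgentAction, spendLimitUsd: number): Decision {
  if (action.costUsd > spendLimitUsd) return "blocked";
  if (!action.reversible || action.costUsd > 0) return "needs_confirmation";
  return "execute";
}

// Example: booking a ride costs money and can't be undone,
// so the agent must surface it for an explicit "yes".
const ride: AgentAction = {
  service: "rideshare",
  description: "Book a ride to the airport, est. $34",
  reversible: false,
  costUsd: 34,
};
console.log(gate(ride, 100)); // "needs_confirmation"
```

The design choice worth noticing: the gate keys on reversibility and cost, properties of the action, not on the model's confidence, because verification should never depend on the model grading itself.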
— 4) RABBIT'S PRIVACY CLAIMS ARE GOOD—BUT THEY ALSO REVEAL THE CORE TENSION

rabbit tried to neutralize the fear that "a camera + mic + assistant" becomes surveillance hardware:

- No always-listening mode; push-to-talk for mic activation
- Camera lens physically blocked by default unless explicitly used
- Claims of not storing third-party credentials
- Authentication happens on the destination service's login system
- Users can link/unlink services

Those are meaningful guardrails. But here's the deeper issue: the product's entire purpose is delegation. Delegation always feels like surrender unless the system makes control visible. So the ethical bar is higher than "we have a privacy policy." The bar is:

- Can the user clearly see what the agent is about to do?
- Can the user stop it easily?
- Can the user review what happened?
- Can the user limit what the agent is allowed to do?

If the answer is "maybe," the device becomes a trust experiment.

— 5) WHO THIS HELPS VS WHO IT PRESSURES (ACCOUNTABILITY BOX)

HELPS
- People overwhelmed by app friction and repetitive tasks
- Users who need voice-first workflows (accessibility upside)
- Anyone who wants a single interface to control multiple services

PRESSURES
- Users who don't understand how much access an "action AI" requires
- People in shared environments, if cameras and mics become normalized "because it's AI"
- The broader public, as "agent behavior" becomes socially acceptable before society agrees on rules

QUIET WINNERS
- Platforms that benefit when content, purchases, and engagement become easier to generate
- Any company that learns to monetize "AI doing things" without taking responsibility for outcomes

QUIET LOSERS
- Users who will be blamed for agent mistakes ("you asked it to do it")
- People who can't opt out of a world where agents act faster than human consent

— 6) A SIMPLE TEST FOR ETHICAL AGENT HARDWARE

If an AI device can act inside your apps, ask one question: does it protect you from your own delegation?

That means:

- Clear confirmations for irreversible actions
- Friction where money, identity, or safety is involved
- Transparent logs of what it did
- Granular permission controls ("this agent can book rides but not pay bills")
- Easy off-switches and unlinking

Because in the agent era, the most dangerous bug is not hallucination. The most dangerous bug is invisible action.

No hype. Just consequences.
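— 7) APPENDIX: THE CHECKLIST, SKETCHED IN CODE

For builders who want the checklist in section 6 to feel concrete, here is a minimal sketch, in TypeScript, of granular permission scopes plus a transparent action log. The Scope names and the AgentSession class are hypothetical illustrations of the pattern; rabbit has not published an interface like this.

```ts
// Granular permissions + an auditable action log, sketched.
// Hypothetical names throughout; not rabbit's API.

type Scope = "book_rides" | "play_media" | "pay_bills" | "send_messages";

interface AuditEntry {
  timestamp: string;
  scope: Scope;
  summary: string;
  outcome: "done" | "denied";
}

class AgentSession {
  private log: AuditEntry[] = [];

  // The user grants an explicit allow-list per linked service,
  // rather than handing over full account access.
  constructor(private granted: Set<Scope>) {}

  act(scope: Scope, summary: string): boolean {
    const allowed = this.granted.has(scope);
    // Every attempt is recorded, including denied ones, so
    // "Can the user review what happened?" is answered by design.
    this.log.push({
      timestamp: new Date().toISOString(),
      scope,
      summary,
      outcome: allowed ? "done" : "denied",
    });
    return allowed;
  }

  history(): readonly AuditEntry[] {
    return this.log;
  }
}

// Example: this agent can book rides but not pay bills.
const session = new AgentSession(new Set<Scope>(["book_rides", "play_media"]));
session.act("book_rides", "Booked ride to the airport");
session.act("pay_bills", "Attempted to pay electric bill"); // denied, logged
console.log(session.history());
```

Unlinking a service is then just revoking scopes, and the log survives the revocation, which is what makes "easy off-switches" more than a marketing line.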

What Changed

This dispatch covers the January 2024 launch of the rabbit r1 and the broader shift it signals: from conversational AI to action-model agents that operate apps and services on a user's behalf.

Why It Matters

When an agent can act as you inside your services, every delegated task carries security, privacy, and accountability stakes. Understanding that trade-off is crucial for informed decision-making about human augmentation technologies and their societal impact.

Sources

  • rabbit newsroom (Jan 9, 2024): introducing r1, a pocket companion that moves AI from words to action
  • The Verge (Jan 9, 2024): The Rabbit R1 is an AI-powered gadget that can use your apps for you
  • TechCrunch (Jan 19, 2024): Forget Apple Vision Pro — rabbit r1 is 2024's most exciting launch yet