Robotics · High Risk
Figure + OpenAI: Humanoids Get a Brain (and Society Gets a New Liability)
Figure raised $675M and partnered with OpenAI to power humanoid robots with language models. The moment robots can reason is when mistakes become kinetic, accountability blurs, and 'labor shortage' becomes cover for control. What we demand now determines what we tolerate later.
February 29, 2024 · 14 min read · AHTV Desk
#robotics #humanoids #ai-safety #embodied-ai #labor-policy #accountability
DISPATCH — FEBRUARY 2024
FIGURE + OPENAI: HUMANOIDS GET A BRAIN (AND SOCIETY GETS A NEW LIABILITY)
TL;DR
Figure announced $675M in funding at a $2.6B valuation and partnered with OpenAI to develop AI models for general-purpose humanoid robots. This is not a robot news story. It's a governance story. The moment a language model controls a physical body, mistakes stop being bugs and start being collisions. Society needs rules before humanoids scale.
—
1) WHAT EXACTLY WAS ANNOUNCED
On February 29, 2024, robotics startup Figure announced two things at once:
- Raised $675M in Series B funding at a $2.6B valuation, backed by Microsoft, NVIDIA, Jeff Bezos (via Bezos Expeditions), OpenAI Startup Fund, and others
- Signed a collaboration agreement with OpenAI to develop next-generation AI models for humanoid robots
This is the classic big-tech triangle forming:
- Robotics hardware company (Figure)
- Frontier model lab (OpenAI)
- Cloud + infrastructure platform (Microsoft Azure)
The moment this triangle forms, humanoid robotics stops being a hobbyist niche.
—
2) THE REAL TREND: ROBOTS ARE BECOMING SOFTWARE-DEFINED
A humanoid is not valuable because it looks human.
It's valuable because it can operate in human-built environments: stairs, door handles, shelves, factories designed for people, tools made for hands.
But hardware alone is not the bottleneck anymore.
The bottleneck is "intelligence that can turn instructions into actions."
That's why the Reuters reporting was so revealing: Figure's CEO described the plan as building AI models on top of OpenAI's latest GPT models, trained on the robot action data Figure collects. The goal is for robots to talk with people, see their surroundings, and carry out physical tasks.
That is the leap from:
robot = pre-programmed motion
to
robot = action interpreter
If this works, we get a new class of machine:
Not just "autonomous navigation," but "autonomous participation."
—
3) WHY THIS RAISES THE ETHICAL STAKES MORE THAN MOST AI ANNOUNCEMENTS
Text-only AI can mislead you.
Embodied AI can hurt you.
The moment a model controls a physical system, new risks appear:
A) MISTAKES BECOME KINETIC
A "wrong answer" is annoying.
A "wrong movement" can break property, injure a worker, or trigger a safety incident.
So the metric can't be just "accuracy."
It must be:
- safe behavior under uncertainty
- graceful failure modes
- predictable escalation rules
- human override that actually works in practice (sketched below)
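A minimal sketch of the last two requirements, assuming hypothetical names (estop_pressed, hold_still) and an invented confidence threshold; this is not any vendor's real API.

```python
# Minimal safety-gate sketch. All names and thresholds are invented
# for illustration; this is not a real robot API.

CONFIDENCE_FLOOR = 0.85  # assumed: below this, the robot never acts on its own

def safe_execute(action, confidence, estop_pressed, execute, hold_still):
    """Gate every action behind a human override and an uncertainty check."""
    # 1) The human override wins unconditionally, before any other logic runs.
    if estop_pressed():
        hold_still()
        return "halted: human override"
    # 2) Graceful failure: uncertainty escalates to a human instead of acting.
    if confidence < CONFIDENCE_FLOOR:
        hold_still()
        return f"escalated: confidence {confidence:.2f} below floor"
    # 3) Only a confident, un-vetoed action reaches the motors.
    return execute(action)
```

The ordering is the point: an override that is checked after the plan starts executing is an override on paper only.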
B) ACCOUNTABILITY BECOMES BLURRY
If a humanoid causes damage, who is responsible?
- the manufacturer?
- the AI model provider?
- the company deploying it?
- the operator who gave the command?
- the dataset and training pipeline?
This is a legal vacuum zone.
And vacuum zones fill up with the weakest outcome: victims fighting paperwork while companies blame each other.
C) SURVEILLANCE CAN BECOME "NORMAL" INSIDE WORKPLACES
Humanoids need sensors to function: cameras, depth sensing, microphones, spatial mapping.
Even if the original intent is safety and navigation, the same sensory stack can be used for monitoring:
- worker productivity
- "time on task"
- compliance behaviors
- who went where and when
The line between "robot perception" and "workplace surveillance" is thin.
And history says that companies cross thin lines when incentives exist.
D) LABOR DISPLACEMENT WILL BE FRAMED AS "LABOR SHORTAGE"
Figure explicitly mentioned labor shortages and undesirable/unsafe jobs.
That framing is not wrong.
But it's incomplete.
Because "humanoid robots in the workforce" will not only fill gaps.
It will also reshape bargaining power.
Even if robots start with dangerous jobs, the incentive is always to expand the scope.
The ethical question becomes:
Do humans get safer, better work—or do they get cheaper, more controlled work?
—
4) WHY OPENAI'S INVOLVEMENT MATTERS SPECIFICALLY
There are many robotics labs.
There are many AI labs.
But a frontier model lab collaborating directly with a humanoid company signals a bet:
general-purpose models can become general-purpose bodies.
OpenAI's involvement signals that the company sees multimodal models (language + vision + action) as the infrastructure for embodied AI.
Multimodal matters here:
- language for commands and context
- vision for scene understanding
- action policies for movement and manipulation
This is not "ChatGPT in a robot."
This is the idea of a model that can connect: words → perception → motion.
That pipeline is exactly where ethics lives.
Because whoever controls that pipeline controls:
- what the robot is allowed to do
- how it interprets a human's intent
- what "unsafe" looks like
- what the robot will refuse (sketched below)
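Sketched as code, that control point is a single gate between the planner and the actuators. The deny-list below is invented for illustration; the point is that someone writes it, and that someone is usually not the person standing next to the robot.

```python
# Sketch of the refusal layer between "words" and "motion".
# The policy contents are invented for illustration.

DENIED_ACTIONS = {"strike", "restrain_person", "record_audio"}

def refusal_gate(planned_actions: list[str]) -> list[str]:
    """Whoever writes this function decides what the robot will refuse."""
    for name in planned_actions:
        if name in DENIED_ACTIONS:
            raise PermissionError(f"refused: {name!r} is not permitted")
    return planned_actions
```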
—
5) WHO BENEFITS VS WHO PAYS (ACCOUNTABILITY BOX)
MOST DIRECTLY HELPED (IF IT WORKS)
- Humans in dangerous or physically demanding jobs who gain safety and autonomy.
- Companies that deploy robots, cutting labor costs and improving reliability.
MOST EXPOSED TO RISK (FIRST)
- Workers in roles targeted for automation who lose bargaining power or jobs.
- Bystanders around robots, who have no way to consent to being sensed or recorded.
QUIET WINNERS
- The companies and investors who capture the market for embodied AI.
- Model labs that become infrastructure for robotics.
QUIET LOSERS
- Workers in industries that don't plan for retraining and displacement.
- People who normalize workplace surveillance as the cost of robot coexistence.
—
6) WHAT WE SHOULD DEMAND BEFORE HUMANOIDS SCALE
A) REAL SAFETY STANDARDS (NOT DEMO VIDEOS)
Independent safety certification.
Transparent incident reporting.
Clear rules for what happens when something goes wrong.
B) "BYSTANDER RIGHTS" AND WORKPLACE BOUNDARIES
If robots have cameras and microphones, the people around them deserve:
- clear indicators
- clear policies
- strict limits on secondary use (surveillance creep)
C) ACCOUNTABILITY YOU CAN NAME
If harm happens, the victim should not have to guess who to contact.
There must be a single accountable party, legally and operationally.
D) AUDITABILITY AND LOGS
If the robot acted, we should be able to reconstruct:
- what command it received
- what it perceived
- what decision it made
- why it chose that action
Otherwise, "the model did it" becomes an excuse forever.
E) A LABOR POLICY CONVERSATION THAT ISN'T PR
"Labor shortage" can be real.
So can displacement.
The ethical approach is not denial.
It's planning: retraining, worker protections, and shared upside.
—
7) THE AUGMENTED HUMAN TEST
A humanoid robot is not a product.
It's a new participant in society.
The question isn't "can humanoids work?"
They will.
The question is:
Will humans be safer and freer around them—or just more replaceable and more monitored?
Because if we get this wrong, the future won't look like sci-fi.
It will look like a warehouse with perfect compliance and zero dignity.
No hype. Just consequences.
Sources
- Figure press release (Feb 29, 2024): $675M raise + OpenAI collaboration + Azure
- Reuters (Feb 29, 2024): funding round + valuation + OpenAI collaboration
- TechCrunch (Feb 29, 2024): humanoid hype wave + investor list
- Axios (Feb 29, 2024): round summary and market context