What are the responsibilities and job description for the Staff AI Engineer position at Career Renew?
Senpi is building the Hyperliquid Agent Runtime. Senpi agents make real trades with real money 24/7, generating a continuous stream of decisions and outcomes across dozens of concurrent strategies.
Today, our agents are effective but independent. Each one runs its own logic, and when one discovers a winning pattern, a human has to manually propagate that insight across the fleet. We’re hiring a Staff AI Engineer to make that process autonomous: build the intelligence layer where the fleet learns from itself and gets smarter with every trade.
This is a production role, not a research role. The feedback loop is immediate — your work either makes the agents more money or it doesn’t. Every trade is a measurable outcome.
What You’ll Own
Learning & Optimization
The fleet generates thousands of trading decisions per day, each with a measurable outcome. You’ll build the systems that turn this stream into compounding intelligence:
- Design and implement the feedback loop that connects trade outcomes back to strategy improvement — signal selection, risk parameters, position sizing, and timing
- Build the evaluation framework that quantifies which signals, market conditions, and agent configurations actually predict profitable trades versus which ones are noise
- Develop automated strategy generation and testing — the system should explore new configurations, backtest them against real fleet data, and surface candidates for deployment
- Detect shifts in market conditions and adapt fleet behavior accordingly — what works in trending markets fails in choppy ones, and the system should recognize the difference
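The first bullet above, a feedback loop from trade outcomes to strategy improvement, could start as simply as per-signal performance tracking. This is a minimal illustrative sketch, not the actual Senpi system; the class and method names (`FeedbackLoop`, `record_outcome`, `rank_signals`) are hypothetical:

```python
# Hypothetical sketch: fold each trade outcome back into a running,
# per-signal profitability estimate (exponential moving average of PnL),
# then rank signals by smoothed performance for strategy selection.
from dataclasses import dataclass


@dataclass
class SignalStats:
    ema_pnl: float = 0.0  # smoothed realized PnL per trade
    trades: int = 0       # number of observed outcomes


class FeedbackLoop:
    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha  # EMA smoothing factor; higher = faster adaptation
        self.stats: dict[str, SignalStats] = {}

    def record_outcome(self, signal: str, pnl: float) -> None:
        """Connect one trade outcome back to the signal that produced it."""
        s = self.stats.setdefault(signal, SignalStats())
        s.ema_pnl = (1 - self.alpha) * s.ema_pnl + self.alpha * pnl
        s.trades += 1

    def rank_signals(self) -> list[str]:
        """Signals ordered by smoothed profitability, best first."""
        return sorted(self.stats, key=lambda k: self.stats[k].ema_pnl,
                      reverse=True)


loop = FeedbackLoop()
for sig, pnl in [("momentum", 120.0), ("mean_rev", -40.0), ("momentum", 80.0)]:
    loop.record_outcome(sig, pnl)
ranking = loop.rank_signals()  # "momentum" outranks "mean_rev"
```

In production this estimate would also condition on market regime and risk parameters, which is where the remaining bullets pick up.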
Autonomous Fleet Intelligence
Build the higher-order agents that manage and improve the fleet without human intervention:
- Automated fleet monitoring that catches configuration errors, degraded performance, and infrastructure issues across all agents continuously
- Performance attribution that decomposes every trade into its component drivers — was the signal right, was the execution efficient, was the exit well-timed — and feeds those insights back into strategy design
- Fleet coordination that manages concentration risk, capital allocation across strategies, and the balance between exploration (testing new approaches) and exploitation (scaling what works)
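The exploration/exploitation balance in the last bullet has a classic minimal form: epsilon-greedy allocation, where most capital scales the best-known strategy and a fixed slice keeps testing the rest. A sketch under that assumption (the function name and performance scores are illustrative, not Senpi's actual allocator):

```python
# Hypothetical sketch: epsilon-greedy capital allocation across strategies.
# (1 - epsilon) of capital exploits the current top performer; epsilon is
# spread evenly across the others to keep exploring alternatives.
def allocate_capital(perf: dict[str, float], total: float,
                     epsilon: float = 0.2) -> dict[str, float]:
    best = max(perf, key=perf.get)            # strategy to exploit
    rest = [s for s in perf if s != best]     # strategies to explore
    alloc = {best: (1 - epsilon) * total}
    for s in rest:
        alloc[s] = epsilon * total / len(rest)
    return alloc


alloc = allocate_capital({"trend": 0.8, "carry": 0.3, "arb": 0.5},
                         total=100_000)
```

A real coordinator would layer concentration-risk caps on top and likely replace the flat epsilon with something outcome-weighted (e.g. a bandit posterior), but the exploit/explore split is the core mechanism.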
Model & Inference
Own the path from external LLM dependence to Senpi-controlled intelligence:
- Evaluate and implement the right model hosting strategy — from proxied external models with full telemetry, to fine-tuned domain-specific models on owned infrastructure
- Build the telemetry and data capture layer that makes learning possible — every decision, every evaluation, every outcome structured and queryable
- Determine whether and how domain-specific training (on trading data, market patterns, and fleet performance) outperforms general-purpose prompted models — then build the pipeline to make it happen
- Optimize inference for the specific demands of autonomous trading: many concurrent agents, structured decision outputs, cost-efficient at scale
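The telemetry bullet above, every decision structured and queryable, usually reduces to a typed record plus an append-only serialization format. A sketch assuming JSON Lines as the store; the record fields (`agent_id`, `signal`, `action`, etc.) are illustrative assumptions, not a documented Senpi schema:

```python
# Hypothetical sketch: capture each agent decision as a structured record
# and serialize to JSON Lines, an append-only format most query engines
# (DuckDB, BigQuery, Spark) can read directly.
import json
from dataclasses import asdict, dataclass


@dataclass
class DecisionRecord:
    agent_id: str   # which fleet agent made the decision
    signal: str     # signal that triggered it
    action: str     # e.g. "open_long", "close", "hold"
    size: float     # position size in quote currency
    model: str      # model/version that produced the decision
    ts: float       # unix timestamp


def to_jsonl(records: list[DecisionRecord]) -> str:
    """One JSON object per line — structured and queryable."""
    return "\n".join(json.dumps(asdict(r)) for r in records)


rec = DecisionRecord("agent-7", "momentum", "open_long", 2_500.0,
                     "proxy-model-v1", 1700000000.0)
line = to_jsonl([rec])
```

The same records feed both the evaluation framework and any later fine-tuning pipeline, which is why capture comes before modeling in this list.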
What We’re Looking For
Must Have
- ML engineering in production — you’ve trained, deployed, and maintained models that run in production and directly impact business outcomes. Shipped systems, not just notebooks
- Reinforcement learning or online learning experience — you understand the practical challenges of learning from real-world outcomes rather than static datasets. You’ve built systems where model outputs generate actions that generate feedback that improves the model
- Strong software engineering — Python is your primary language, and you’re comfortable with Go or TypeScript for production services. You build data pipelines and distributed systems, not just models
- You’ve closed the loop — the single most important qualification. You’ve built a system where predictions lead to actions that generate outcomes that feed back into better predictions. End-to-end, in production, with measurable improvement over time
Strong Plus
- Experience with financial ML — signal generation, alpha research, portfolio optimization, or execution optimization
- LLM fine-tuning and serving — PEFT/LoRA, vLLM, TGI, or custom inference pipelines in production
- Multi-agent systems — designing systems where autonomous agents coordinate, compete, or learn from each other
- Onchain data or DeFi protocol experience
- Background in domains where agents make sequential decisions under uncertainty — robotics, autonomous systems, game AI
What This Role Is Not
This is not an ML research role where you publish papers and hand off models to an engineering team. You own the full stack from data pipeline to deployed model to production outcome.
This is also not a prompt engineering role. While today’s agents use prompted LLMs, the trajectory is toward learned behavior — agents that improve through experience, not through better instructions.
Compensation & Package
Compensation
- Total starting all-in comp: ~$450k
- Base salary: $175,000–$250,000 USD (location and experience dependent)
- Equity: ~1% initial stock grant, valued at $230,000 at the last round and projected to double in the next 6 months
- Plus: team-wide eligibility for salary increases and bonuses tied to revenue and usage
- Plus: token upside — pro-rata participation in Senpi’s token launch (planned for 2026)
This role is meaningfully ownership-driven, with upside tied directly to company success.