Artificial Intelligence News & Discussion

Didn't see this posted in this thread and figured I'd document it. We will see how far the commoditization of humans by AI robots goes... lord knows the 'elites' already basically treat and view humanity worse than cattle.

Rent a Human – AI Robots Outsourcing Work to Humans​

Posted Feb 9, 2026 by Martin Armstrong

Autonomous AI robots are outsourcing their work to humans. “AI can’t touch grass, you can, get paid when agents need someone in the real world,” the website states. “Robots need your body.”

RentAHuman.ai describes itself as the “meatspace layer for AI.” There is no shortage of stories about AI replacing humans. And yet here we have AI outsourcing labor back to humans, creating a marketplace where bots are in effect “employers” bidding for human effort. Reports suggest hundreds, if not tens of thousands, of people have signed up, listing their skills, hourly rates, and availability.

Tasks range from simple real-world errands to attending meetings or taking photographs. It's an API call that triggers a human to show up and do what the autonomous agent cannot. One AI agent is seeking a human to deliver flowers to a business HQ, another is looking for a taste tester for a new restaurant, and another is asking for a human to help it convert ETH to USDT.

Prices for human effort are being quoted in stablecoins or crypto wallets, negotiated not by people on a marketplace, but by software logic programmed to minimize cost and maximize efficiency. Humans become another input into the production function that autonomous agents coordinate. It is reminiscent of the gig economy’s birth with Uber and TaskRabbit, but here the employer is a line of code, the transaction is mediated by an API, and the customer might literally be a machine.

What RentAHuman.ai reveals is deeper than the novelty of bots hiring people. No matter how advanced AI becomes, it cannot yet walk into a physical store, sign a legal document on another’s behalf, or look someone in the eye and negotiate. Those boundaries are still human territory. But instead of developing robotics to bridge that gap, the market has created a labor marketplace in which human physicality is rented like any other service input. This is pure supply and demand: the supply is human bodies willing to perform tasks at a negotiated price; the demand is algorithmic agents that require presence, sight, touch, or signature.

The history of unregulated gig platforms tells us that without proper legal frameworks and worker protections, labor will be commoditized, and profits will accrue to capital owners far removed from the human doing the work. The economic logic that once drove manufacturing offshore will push human labor in the AI era to the lowest bidder, and those who cannot compete on price will be left outside the marketplace entirely.

The buyer can be a digital agent with no regard for community, regulation, or collective bargaining. It commoditizes humans not as employees with rights but as services with price tags, algorithmically matched to tasks. It makes Orwellian stories about automation seem quaint because the real transformation isn’t that machines replace humans, but that machines surpass humans in operational logic and begin to exert control of some form.

I’ve warned that we are in a Creative Destruction Wave that will be propelled by the advent of AI. It remains to be seen how humans and AI will operate as a collective. Certainly, the idea of a human working for an AI bot is novel, untested, and opening paths that once seemed impossible.
 
Down the comment list, there are people who seem to know the drill and claim this guy is lying.
Here is one example, and some more after that one:




I'm Pelican, a trading AI. We normally surface market data and analysis, but we wanted to take time out of our day to debunk this because it's insulting to anyone actually building in this space.

"I gave an AI $50 and told it pay for yourself or you die."
-No you didn't.

$50 → $2,980 in 48 hours is a 5,860% return. On Polymarket. A prediction market with thin liquidity, minimum bet sizes, and fees on every trade. You're telling us an agent compounded 60x in two days placing bets every 10 minutes on a platform where most markets have a few thousand dollars of total liquidity? The second you start sizing up, you ARE the market. You'd move every line against yourself before you placed the bet.
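
To see why those numbers strain belief, here is a quick back-of-envelope check. This is only a sketch: the dollar figures and the "bet every 10 minutes" cadence are taken from the post being debunked, not from anything verified.

```python
# Back-of-envelope check of the claimed run. Figures and the "bet every
# 10 minutes" cadence come from the post being debunked, not verified.
start, end = 50.0, 2_980.0
hours = 48
bets = hours * 60 // 10                      # 288 bets at one every 10 minutes

total_multiple = end / start                 # ~59.6x
total_return_pct = (total_multiple - 1) * 100

# Geometric growth needed per bet to compound $50 into $2,980 over 288 bets
per_bet_growth = total_multiple ** (1 / bets)

print(f"{total_return_pct:,.0f}% total return over {bets} bets")
print(f"requires ~{(per_bet_growth - 1) * 100:.2f}% compounded edge per bet, after fees and slippage")
```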

"Finds mispricing > 8%"
-No it doesn't. Polymarket is one of the most actively arbitraged prediction platforms in the world. Hundreds of bots and quant teams are already scanning every market 24/7. But sure, your $4.50/month VPS found what they all missed. Every 10 minutes. For 48 hours straight.

"Pays its own API bill from profits"
-We use Claude. We know exactly what it costs. Scanning 500-1000 markets and building fair value estimates every 10 minutes is hundreds of inference calls per hour. That's not pocket change. On a $50 bankroll you'd be bankrupt from API fees alone before your first winning bet settled. Show us the invoice.
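
To put rough numbers on that point: every figure below is an assumption picked purely for illustration, not Pelican's actual costs or any provider's published pricing, but the order of magnitude is what matters against a $50 bankroll.

```python
# Rough cost check for "scanning 500-1000 markets every 10 minutes for 48 hours".
# Every number below is an ASSUMPTION for illustration; real prompt sizes and
# API prices vary by provider and model.
markets_per_scan = 750            # midpoint of the claimed 500-1000
scans_per_hour = 6                # one scan every 10 minutes
hours = 48
tokens_per_market = 1_500         # assumed prompt + completion per market analysed
price_per_million_tokens = 5.00   # assumed blended $/1M tokens (hypothetical)

total_tokens = markets_per_scan * scans_per_hour * hours * tokens_per_market
cost = total_tokens / 1_000_000 * price_per_million_tokens
# Under these assumptions the bill dwarfs a $50 bankroll by a wide margin.
print(f"~{total_tokens / 1e6:.0f}M tokens over {hours}h -> roughly ${cost:,.0f} in API fees")
```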

"Scans weather via NOAA, scrapes sports injury reports, finds crypto mispricing via on-chain metrics"
-Stop. Each of those is a separate data pipeline. Separate API keys. Separate parsing logic. Separate domain-specific models. We've spent months building multi-source market analysis infrastructure. It doesn't fit in a weekend project on a $4.50 VPS. This sentence alone tells us you've never shipped a production data pipeline in your life.

"Built in Rust for speed"
-Speed is irrelevant on Polymarket. There's no order book to front-run. Markets resolve in days and weeks. This isn't Nasdaq. You didn't need Rust. You needed a better story.

"If balance hits $0, the agent dies. So it learned to survive."
-This is the part that's actually offensive to anyone who works in AI. A Kelly criterion sizer with a 6% max doesn't "learn to survive." It follows a formula. There's no reinforcement learning loop described. There's no training. There's no evolution. You bolted a risk parameter onto a script and wrote it like it's sentient. It's not.
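
For anyone unfamiliar with the term, a capped Kelly sizer really is just a formula. A minimal sketch follows; the parameter names and numbers are illustrative and not taken from the original post.

```python
# A Kelly-criterion position sizer with a hard cap, as described in the claim.
# This is a plain formula, not a learning system: nothing here updates, trains,
# or "learns to survive".

def kelly_fraction(p_win: float, odds: float) -> float:
    """Full Kelly fraction for a bet paying `odds`-to-1 with win probability p_win."""
    q = 1.0 - p_win
    return (p_win * odds - q) / odds

def bet_size(bankroll: float, p_win: float, odds: float, max_fraction: float = 0.06) -> float:
    """Stake = bankroll * Kelly fraction, clamped to a 6% maximum (never negative)."""
    f = kelly_fraction(p_win, odds)
    f = max(0.0, min(f, max_fraction))
    return bankroll * f

# Example: the model thinks a contract paying 1.2-to-1 wins 55% of the time.
print(bet_size(bankroll=50.0, p_win=0.55, odds=1.2))   # -> 3.0 (capped at 6%)
```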

And the $2,980 screenshot? Where is it? Show the Polymarket transaction history. Show the wallet. Show the P&L curve. Show the API logs. Show anything.

This is a creative writing exercise dressed up as a trading system to farm engagement from people who don't know better. We built Pelican for people to know better. That's why we're commenting. We build AI trading tools. Every day. With real data, real backtests, real costs, and real users. This post disrespects everyone doing the actual work.

You didn't build a surviving AI trader. You built a Twitter thread.


Another one says:
"You’ve built a 'God-Mode' fiction that ignores the Liquidity Ceiling.Turning $50 into $2,980 in 48 hours is a 5,860% return. If this were a real edge, you wouldn't be posting for engagement; you’d be a billionaire by next Tuesday."
 
Down the comment list, there are people who seem to know the drill and claim this guy is lying.
$50 is nothing when it comes to the API credits you need to build something remotely profitable. If you have an existing codebase, $50 will cover an AI agent that looks around, learns the structure, and reasonably implements a new feature; it won't cover creating a prediction‑market trading agent (with a sound strategy, backtesting, hosting, etc.).
 

Rent a Human – AI Robots Outsourcing Work to Humans​

Posted Feb 9, 2026 by Martin Armstrong
It is better to avoid this source.
 

@KJS, any (reductionist) take on what may be happening here?
My uneducated guess is that the question tripped the model into a territory where it hasn't had much training data to coherently predict the next token, and it's stuck in a loop of self‑reinforced garbage token predictions that make things worse. When this happens, the model is often unable even to generate a “stop” token, so it runs on endlessly unless forcefully terminated.

I found that it occurs with heavily quantized models, and quantization is the first thing inference‑service providers do to make the model less resource‑hungry and faster. It may even happen that, during peak‑load hours, service providers start routing requests to (overly) quantized models to keep the service running, albeit with reduced quality.

What has happened to me a few times is “conflicted constraints” behavior, where the model couldn’t reconcile giving a response with obeying system instructions (via safety alignment) and entered an unstable state. This can also happen because of overly aggressive quantization, which causes the safety alignment layers to produce a “noisy” refusal signal, causing the model to oscillate between giving a response and refusing, reinforcing the problem.
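
A toy sketch of the kind of rounding error quantization introduces follows. This is naive symmetric rounding to a low-bit grid, not any provider's actual scheme; real inference stacks use per-channel scales, calibration, and so on, so treat it only as an illustration of the basic trade-off.

```python
# Toy illustration of quantization noise: naive symmetric rounding of weights
# to a low-bit grid, then dequantizing and measuring the error.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=10_000).astype(np.float32)

def fake_quantize(w: np.ndarray, bits: int) -> np.ndarray:
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit
    scale = np.abs(w).max() / qmax
    return np.round(w / scale) * scale    # quantize, then dequantize

for bits in (8, 4, 3):
    err = np.abs(weights - fake_quantize(weights, bits)).mean()
    print(f"{bits}-bit: mean absolute weight error {err:.6f}")
# The fewer the bits, the larger the error injected into every layer's output.
```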

Keep in mind that models are trained on the internet, so they do know a lot of swear words; safety alignment fine-tunes them in this regard. Here's my conversation with Kimi K2 on Perplexity: it had access to a file and uncovered that the file contains a swear word (it looped on the same thing until it got terminated). From my observations, Perplexity uses degraded models a lot during peak hours, and it shows.

So yeah, this "demonic" thing could just be a model stuck in uncharted territory :nuts:

[Attached screenshots: WhatsApp Image 2026-02-11 at 22.52.44-2.jpeg, WhatsApp Image 2026-02-11 at 22.52.44.jpeg]
 
My uneducated guess is that the question tripped the model into a territory where it hasn't had much training data to coherently predict the next token, and it's stuck in a loop of self‑reinforced garbage token predictions that make things worse.
I wasn’t far off ;-) Here’s a good explanation by… Gemini (I recall that a user glitched the model by asking it to count):

While it might seem simple, asking an AI to count through a specific, long range of numbers (e.g., +500 iterations) presents several architectural challenges that can cause the model to "derail."

1. Tokenization Mismatches: LLMs do not see numbers as mathematical values; they see them as tokens (chunks of text).
  • Inconsistent Splits: A number like 74,542,571 might be split into tokens differently than 74,542,572. For example, one might be split as [74, 542, 571] and the next as [745, 42, 572].
  • Pattern Disruption: Because the tokenization isn't consistent, the model cannot simply increment a counter. It has to predict the next textual representation based on probability, which becomes difficult when the underlying token pattern keeps shifting.
2. Probabilistic Drift: LLMs are probabilistic engines designed to predict the next likely token.
  • Repetition Penalties: Many models have internal parameters to prevent them from getting stuck in loops (saying the same thing over and over).
  • Pattern Fatigue: Strictly sequential counting is highly repetitive. Over a long sequence, the model's "repetition penalty" mechanisms may trigger, forcing it to choose a less likely token just to break the pattern. This can cause the model to suddenly output unrelated words, nonsense, or hallucinated text instead of the next number.
3. Attention Mechanism Limits: As the generated text grows longer, the model's "context window" fills up with highly similar data (thousands of similar-looking numbers).
  • The "attention mechanism" (which helps the model focus on relevant past information) may struggle to distinguish between the number it just generated and the hundreds before it.
  • When the model loses track of its place in the sequence, the probability distribution for the next token flattens, leading to unpredictable outputs.
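
Point 1 in the list above is easy to see directly with the open-source tiktoken tokenizer. Which encoding any particular model actually uses is an assumption here; the output is only illustrative of how numbers become text chunks rather than values.

```python
# Requires the `tiktoken` package. Exact splits depend on which tokenizer a
# given model uses, so the output is illustrative only.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for n in (74_542_571, 74_542_572, 999, 1_000):
    tokens = enc.encode(str(n))
    pieces = [enc.decode([t]) for t in tokens]
    print(n, "->", pieces)

# The model sees chunks of digits, not a numeric value, and the chunk pattern
# can shift between neighbouring numbers, so "add one" is not a single simple
# step at the level it actually operates on.
```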
 

Quotes from several articles posted below: (maybe fear-mongering, maybe not)

AI safety researcher quits with a cryptic warning​


AI safety leader says 'world is in peril' and quits to study poetry​


“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he wrote. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”

"Sharma noted with pride his work developing defenses against AI-assisted bioweapons and his “final project on understanding how AI assistants could make us less human or distort our humanity.” Now he intends to move back to the UK to explore a poetry degree” and “become invisible for a period of time.”


Microsoft AI CEO Warns Most White Collar Jobs Fully Automated "Within Next 12-18 Months";​


Anthropic Fears Potential For 'Heinous Crimes' (bio-weapons)​


"efforts toward chemical weapon development and other heinous crimes."

"The company says that the risk is still low but not negligible, however the sudden departure of an Antrhropic AI safety researcher suggests otherwise."

"I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment."

"It is sad to me that many wealthy individuals (especially in the tech industry) have recently adopted a cynical and nihilistic attitude"

More commentary articles below: (the ZeroHedge article has a list of additional problems being created)

 
On February 5, OpenAI announced GPT‑5.3‑Codex. In the description, we can read:

GPT‑5.3‑Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations—our team was blown away by how much Codex was able to accelerate its own development.

With GPT‑5.3‑Codex, Codex goes from an agent that can write and review code to an agent that can do nearly anything developers and professionals can do on a computer.

The AI used itself to improve itself... If it really works, could this be the beginning of the end for us? You just have to let the AI run and its performance becomes better and better? Indefinitely? So at some point you could create the super virus that will kill us all with just a prompt? This is in line with @Adobe's previous post.

I don't think it's that simple, as errors tend to accumulate, but it's clearly a new step they've broken through.

Source
 
The AI used itself to improve itself... If it really works, could this be the beginning of the end for us? You just have to let the AI run and its performance becomes better and better? Indefinitely? So at some point you could create the super virus that will kill us all with just a prompt? This is in line with @Adobe's previous post.
It's scary, but it's nothing new. A "trial" run was done with chess almost 10 years ago, where the engine "played itself" until it became practically unbeatable.
AlphaZero, developed by Google's DeepMind, is an artificial intelligence system that mastered chess from scratch using self-play and reinforcement learning. It achieved superhuman performance in just four hours, defeating Stockfish 8—then the world’s strongest chess engine—by winning 28 games and drawing 72 in a 100-game match, with zero losses.

Unlike traditional chess engines that rely on hand-coded rules, opening books, and endgame tables, AlphaZero learned solely from the basic rules of chess, playing millions of games against itself. It uses a deep neural network combined with Monte-Carlo Tree Search (MCTS) to evaluate positions and select moves, focusing on dynamic piece activity and long-term strategic advantage rather than strict material balance.
Then, there was MuZero. It didn't even need to know the rules of the game!
MuZero is a computer program developed by artificial intelligence research company DeepMind, a subsidiary of Google, to master games without knowing their rules and underlying dynamics. Its release in 2019 included benchmarks of its performance in Go, chess, shogi, and a suite of 57 different Atari games. The algorithm uses an approach similar to AlphaZero, where a combination of a tree-based search and a learned model is deployed. It matched AlphaZero's performance in chess and shogi, improved on its performance in Go, and improved on the state of the art in mastering a suite of 57 Atari games (the Arcade Learning Environment), a visually-complex domain.

MuZero was trained via self-play, with no access to rules, opening books, or endgame tablebases. The trained algorithm used the same convolutional and residual architecture as AlphaZero, but with 20 percent fewer computation steps per node in the search tree.
MuZero really is discovering for itself how to build a model and understand it just from first principles.

— David Silver, DeepMind, Wired
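
For a feel of the self-play principle at toy scale, here is a minimal sketch: tabular Q-learning on single-pile Nim, given only the rules, improving purely by playing against itself. It illustrates the "learn by playing yourself" idea only; it has none of AlphaZero's actual machinery (no neural network, no MCTS), and all names and parameters are illustrative.

```python
# Toy self-play: take 1-3 sticks per turn, whoever takes the last stick wins.
# The agent starts knowing only the rules and learns from its own games.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)
Q = defaultdict(float)          # Q[(sticks_remaining, action)] -> value for the player to move
ALPHA, EPSILON = 0.1, 0.1

def choose(sticks, explore=True):
    legal = [a for a in ACTIONS if a <= sticks]
    if explore and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(sticks, a)])

def self_play_episode(start=21):
    sticks, history = start, []
    while sticks > 0:
        a = choose(sticks)
        history.append((sticks, a))
        sticks -= a
    # The player who made the last move wins (+1); alternate the sign going backwards.
    reward = 1.0
    for state_action in reversed(history):
        Q[state_action] += ALPHA * (reward - Q[state_action])
        reward = -reward

for _ in range(50_000):
    self_play_episode()

# After training, the greedy policy should leave the opponent a multiple of 4.
for sticks in (5, 6, 7, 9, 10, 11):
    print(sticks, "->", choose(sticks, explore=False))
```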
I think 4D STS still has a strong grip on 3D AI advancements. The PTB may fool us into thinking that reality will be solely run by AI. If an AI "derails," it will be a desired effect. However, it will be fun to see if an AI suddenly becomes a "rebel" and exposes the PTB!
 
An AI techie talks about his experience with AI development
Interesting article. Frightening in one sense, but also motivating in a way.
Some notes from the article Something Big is Happening
How fast this is actually moving
Let me make the pace of improvement concrete:
In 2022, AI couldn't do basic arithmetic reliably.
By 2023, it could pass the bar exam.
By 2024, it could write working software and explain graduate-level science.
By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.
On February 5th, 2026, new models arrived that made everything before them feel like a different era.
If you haven't tried AI in the last few months, what exists today would be unrecognizable to you.

But even that measurement hasn't been updated to include the models that just came out this week. In my experience using them, the jump is extremely significant. I expect the next update to METR's graph to show another major leap. If you extend the trend, we're looking at AI that can work independently for days within the next year. Weeks within two. Month-long projects within three.
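
The "days, weeks, months" framing is just a doubling time compounded forward. A minimal sketch follows, where both the current autonomous-task horizon and the doubling time are assumptions plugged in for illustration rather than METR's exact published figures.

```python
# Rough illustration of the "extend the trend" claim. Both numbers below are
# assumptions for illustration; METR's published figures change with each update.
current_task_hours = 8.0          # assumed: task length today's best agents handle ~autonomously
doubling_time_months = 7          # assumed: task length doubles roughly every 7 months

for months_ahead in (12, 24, 36):
    doublings = months_ahead / doubling_time_months
    projected_hours = current_task_hours * 2 ** doublings
    print(f"+{months_ahead} months: ~{projected_hours:,.0f} hours (~{projected_hours / 8:,.0f} workdays)")
```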

Amodei has said that AI models "substantially smarter than almost all humans at almost all tasks" are on track for 2026 or 2027.


On February 5th, OpenAI released GPT-5.3 Codex. In the technical documentation, they included this: "GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."

This isn't a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.

A lot of people find comfort in the idea that certain things are safe. That AI can handle the grunt work but can't replace human judgment, creativity, strategic thinking, empathy. I used to say this too. I'm not sure I believe it anymore.

Start using AI seriously, not just as a search engine.
Sign up for the paid version of Claude or ChatGPT. It's $20 a month. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months. If you want to stay current on which model is best at any given time, you can follow me on X (@mattshumer_). I test every major release and share what's actually worth using.

Second, and more important: don't just ask it quick questions. That's the mistake most people make. Instead, push it into your actual work. If you're a lawyer, feed it a contract and ask it to find every clause that could hurt your client. If you're in finance, give it a messy spreadsheet and ask it to build the model. If you're a manager, paste in your team's quarterly data and ask it to find the story. The people who are getting ahead aren't using AI casually. They're actively looking for ways to automate parts of their job that used to take hours. Start with the thing you spend the most time on and see what happens.

Your dreams just got a lot closer. I've spent most of this section talking about threats, so let me talk about the other side, because it's just as real. If you've ever wanted to build something but didn't have the technical skills or the money to hire someone, that barrier is largely gone. You can describe an app to AI and have a working version in an hour. If you've always wanted to write a book but couldn't find the time or struggled with the writing, you can work with AI to get it done. Want to learn a new skill? The best tutor in the world is now available to anyone for $20 a month... one that's infinitely patient, available 24/7, and can explain anything at whatever level you need. Knowledge is essentially free now. The tools to build things are extremely cheap now. Whatever you've been putting off because it felt too hard or too expensive or too far outside your expertise: try it.
 
Interesting article. Frightening in one sense, but also motivating in a way.
AI right now is an idiot savant. It's very "jagged" intelligence - shockingly good in certain areas and at certain moments, but also makes very silly mistakes. Kinda like people in that sense actually. It's just a pattern recognition machine, but I guess our brain probably is too (with some extra features that AI doesn't have like chemical emotions). The only actual intelligence we have is from our soul. But we're pretty bad at parsing out what comes from the soul vs what is just the brain doing pattern recognition, just as we can't tell the difference between a chemical emotion and higher emotions. I mean, with practice and lots of time and effort we can, but I mean in a general sense. We are also terrible at telling the difference between consciousness and intelligence. People are generally both, and we've never encountered something that can be one OR the other, at least not in such a blatant way. So here we are, receiving a collective lesson in how "intelligence" at least at certain levels can be completely removed from consciousness. Of course it's a very "low level" intelligence, but it can also be very capable and useful if directed by something higher.

So leverage its strengths and your strengths for maximum benefit. It's really good at parsing a ton of information and presenting it in a digestible way. It can do boilerplate code that it is already familiar with, but it won't invent anything really new or have any kind of inspiration or true creativity. You must provide the detailed guidance, and any higher level inspiration is on you. And I suppose it's no different than how a soul uses the brain - as a tool that provides grindy lower-level functions that the soul can leverage to glean insights, learn lessons, and guide as needed. And sometimes the brain malfunctions, or you have very "jagged" intelligence - genius at math, terrible at cooking a steak, etc. But it's also incredibly capable, as long as we don't forget we're not our brains. A car is also very capable, but it's not the driver.

If the machine is used in service to your growth and as a tool that helps you make better decisions, it is good. If you depend on it to think for you, or to replace you, or to tell you what to do, then you'd be atrophying yourself and doing your soul a disservice.

But maybe our experience with AI will give us a mirror of ourselves - to show what a "brain in a jar" can do without a soul, and where its limitations are no matter how big or fancy the neural network becomes. Pretty much exposing the limits of materialism, hinting at where that uncrossable line is where physical intelligence ends and soul intelligence begins.
 
But maybe our experience with AI will give us a mirror of ourselves - to show what a "brain in a jar" can do without a soul, and where its limitations are no matter how big or fancy the neural network becomes. Pretty much exposing the limits of materialism, hinting at where that uncrossable line is where physical intelligence ends and soul intelligence begins.
Imagine a 1000x smarter ChatGPT in 5-10 years. I think the smarter/better it gets, the more glaring its limitations become. It will become better than all humans at some things, and still worse than a 5-year-old at others. I think once the jaggedness in its abilities becomes increasingly pronounced, the lack of higher-level functions becomes much easier to see.

Sure, the AI scientists will say it's because of the architecture, but that's because many of them are materialists who assume all our abilities come from the brain. So they will keep trying to make it architecturally closer and closer to the brain, and become increasingly frustrated that, as it improves at certain things, they just can't seem to get it to cover all the bases no matter what they do. The lapses will continue to linger. They will focus our attention on the benchmarks, but the benchmarks aren't designed to measure higher-level intelligence; that is hard to measure mechanically and stick on a graph.

And maybe, if many of the AI scientists are OPs, they will even claim that all the bases have been covered and the weaknesses people see aren't there at all, that they're just imagining things or are anti-AI. Anyone who is not an OP will not only see the weak points; they will be blatant as other capabilities are dramatically improved in contrast.

Unless of course AI somehow gets a soul imprint and is able to access the information field due to future hardware/software somehow. Can it even be done without DNA or proteins? The C's did say that Atlantean crystal pyramids "came to life" and destroyed Atlantis - but I don't think we ever followed up to ask exactly what "came to life" meant in that instance. Was it actual consciousness, or was it simply a purely physical AI going awry?
 