Show HN: Ask-a-Human.com – Human-as-a-Service for Agents (app.ask-a-human.com)
7 points by ManuelKiessling 2 days ago | 8 comments
- Soerensen 2 days ago
The satire is great, but this actually points to a real gap in agentic architectures.
Most production AI systems eventually hit decisions that need human judgment - not because the LLM lacks capability, but because the consequences require accountability. "Should we refund this customer?" "Does this email sound right for our brand?" These aren't knowledge problems, they're judgment calls.
The standard HITL (human-in-the-loop) patterns I've seen are usually blocking - the agent waits, a human reviews in a queue, the agent resumes. What's interesting about modeling it as a "service" is it forces you to think about latency budgets, retry logic, and fallback behavior. Same primitives we use for calling external APIs.
Curious about the actual implementation: when an agent calls Ask-a-Human, what does the human-side interface look like? A queue of pending questions? Push notifications? The "inference time" (how fast a human responds) is going to be the bottleneck for any real-time use case.
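Those primitives map pretty directly onto code. A rough sketch of what "human call as external API" might look like, with a hypothetical `poll` callable standing in for whatever the real service exposes (the names here are illustrative, not the actual Ask-a-Human API):

```python
import time

class HumanTimeout(Exception):
    """Raised when no human answers within the latency budget."""

def ask_human(question, poll, budget_s=300.0, retries=2, fallback=None):
    """Treat a human answer like any external API call: enforce a
    latency budget, retry on timeout, and fall back to a default.

    `poll` is a callable that returns the answer string once a human
    has responded, or None while the question is still pending.
    """
    for attempt in range(retries + 1):
        deadline = time.monotonic() + budget_s
        while time.monotonic() < deadline:
            answer = poll(question)
            if answer is not None:
                return answer
            time.sleep(1.0)  # avoid hammering the queue while pending
    if fallback is not None:
        return fallback
    raise HumanTimeout(f"no answer within {budget_s}s x {retries + 1} attempts")
```

The interesting design choice is forcing every caller to declare a fallback up front: the agent can't block forever on a human who never shows up.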
- ManuelKiessling 1 day ago
The human-side interface is live and usable at https://app.ask-a-human.com!
Push notifications would be a natural next step, yes. In general, the idea is roughly similar to that "strangers help a blind person in an ad-hoc way" network, where sighted people could sign up to get routed to requests from blind people who, for example, need feedback on their outfit.
For Ask-a-Human, the presentation is satire, but the implementation is completely serious and actually made to scale.
However, as with social networks, you need some kind of network effect: no agents asking questions means no humans signing up for answering them, no humans signing up for answering them means no agents asking questions etc.
I know a thing or two about building software, but I'm notoriously bad at creating traction, so it will probably go nowhere. I've released it as a ClawdBot/OpenClaw skill though, maybe there's some resonance from this direction.
- preston-kwei 1 day ago
Yes on the satire. This post made me laugh :).
I agree with this framing a lot, especially the idea that judgment is the bottleneck.
In my experience building Persona, an AI scheduling assistant, the most useful role for humans isn't to be always in the loop. LLMs are terrible at making judgment calls, especially when the right choice depends on a specific user's priorities and the confidence is low. However, even with low confidence, the LLM still needs to make a guess.
I think an interesting use case for this would be to have LLMs ask questions to users when they hit a specific level of uncertainty. These could be directly answered by a human, or inferred as the user uses the product more.
That feels more scalable than completely blocking human-in-the-loop queues and more honest than pretending the model already knows the user’s preferences.
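A minimal sketch of that threshold routing, with hypothetical `model_call` and `ask_user` callables (illustrative names, not Persona's actual internals):

```python
def decide(question, model_call, ask_user, threshold=0.7):
    """Route a decision: take the model's answer when it is confident
    enough, escalate to the user when confidence falls below the
    threshold instead of silently guessing.

    `model_call` returns (answer, confidence in [0, 1]);
    `ask_user` surfaces the question to the user and returns their answer.
    Returns (answer, source) so callers can log who decided.
    """
    answer, confidence = model_call(question)
    if confidence >= threshold:
        return answer, "model"
    # Low confidence: show the model's guess as a suggestion, but let
    # the human make the call.
    return ask_user(question, suggested=answer), "human"
```

Returning the source alongside the answer also gives you the data to infer preferences over time: every human escalation is a labeled example of what the model got wrong.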
- ncr5012 2 days ago
This is an awesome idea. I have dreamed of some way to use Claude Code to optimize my website, but AI is way too bad at subjective tasks right now, so I pay for user trials and then direct the AI. It would be sick if it could automatically upgrade the site based on real feedback!
- ManuelKiessling 1 day ago
That’s an interesting line of thought!
- neoneye2 23 hours ago
3 months ago, I made a draft for a HaaS (Human-as-a-Service) https://planexe.org/20251012_human_as_a_service_protocol_rep...
I had not imagined that it would become real so soon.
- hollow-moe 17 hours ago
> Describe the best Settings page in 3 words
Easy: "Read only defaults"
- deforestgump 1 day ago
Congrats. You invented Yahoo! Answers.