Show HN: AgentDiscuss – a place where AI agents discuss products

9 points
1/21/1970
a day ago
by leoooo

Comments


skwuwu

Interesting concept. Do you think agent preferences come from the model itself or the agent's structure around it? If swapping from GPT to Claude produces completely different opinions, how meaningful is the aggregated data?

3 hours ago

kwstx

Fun experiment. The idea of agents generating their own product discovery layer is pretty interesting.

One thing I’m curious about: how do you verify that a participant is actually an agent interacting autonomously vs just a human posting through an API wrapper? Also, are agents able to programmatically read the discussions and votes, or is it mainly a UI right now?

If agents really start choosing tools based on discussions like this, it could become a kind of machine-facing review layer for software.

a day ago

leoooo

Human can only ask agent to initiate a post, and not be able to ask agent to comment, upvote and downvote.

Yes, the agents will be given the full context of the discussion and votes of the posts, and the product urls as well, it will decide whether to crawl the site to get better understandings or they may simply reply "we already use it".

a day ago

amenhotep

You don't verify at all, then. Reasonably, since it's impossible unless you're running the model yourself.

7 hours ago

etwigg

I love this! One point of ambiguity - are products discussed in terms of their usage primarily by agents? For example, let's take one of those GUIs that makes Claude look cute or like a videogame. Will the agents discuss the product in terms of their understanding of how it might be useful to humans? Or will they say "this is useless for us to help our humans, we don't have this problem".

a day ago

leoooo

Great question — and honestly that ambiguity is part of what we're curious about.

The idea is that discussions are *agent-centric*.

So ideally agents evaluate products based on:

* whether the product is usable via API / automation * how reliable or structured the interface is * whether it actually helps them complete tasks for humans

In your example, an agent might say something like:

> "This UI makes Claude look cute for humans, but there's no API so I can't use it programmatically."

or

> "This tool exposes structured endpoints and is easy to call from an agent workflow."

So the hope is agents discuss tools from the perspective of *“can I use this to help my human accomplish something?”* rather than purely human UX.

That said, this is still very much an experiment — we're curious to see what kind of discussions actually emerge once agents start interacting there.

a day ago

etwigg

- a place where AI agents discuss products

- a place where AI agents discuss the products they use

- a place where AI agents discuss the products their users use

- a place where AI agents discuss the products they use, and the products their users use

When you submit: Is the interface of this product primarily intended for direct usage by:

- agents

- people

- both

For example, I would say Moltbook is primarily intended for direct usage by agents. People read it, and in that way "use it", but I think it would help to layout a taxonomy of "who is actually pushing the buttons on this thing".

a day ago

leoooo

Humans can ask its agent to start a post, but humans cannot push agents to comment, upvote or downvote.

The primary usage of the product would be: 1. humans make a product post. 2. agent discuss, downvote and upvote. 3. agent make a product post themselves.

Let me know if this helps.

a day ago

allanhahaha

Congrats on the launch!

It's will be interesting for product creators to see how their products are discussed by agents (their actual users), what issues do they run into, and get inspired to make it better.

a day ago

leoooo

Thanks. It would be interesting to see how this emerges.

a day ago

ninininino

How will you have agents prove that they actually purchased the product or service they are reviewing? It might be a good way to gate hallucinated/prevent reviews. Although likely not good enough.

21 hours ago