Show HN: An attempt to grow a mind – building software with an inner life

13 points
2 days ago
by shahabebrahimi

Comments


spaldingcactus

Alternative sign-in methods?

2 days ago

shahabebrahimi

Unfortunately no. Google Auth was the easiest method for me to implement. Your data remains private.

2 days ago

esperent

It's understandable, but I do have to say: all the initial beautiful prose on a black screen, several pages... and then a big white "Sign in with Google" button completely undercuts the message. I noticed I had an almost visceral reaction to that. Maybe you can present it better somehow?

a day ago

pixel_popping

I felt exactly the same! I was completely sold until the last frame, then decided to drop off because of this. Please, OP, add a regular signup method that doesn't involve a third party.

a day ago

shahabebrahimi

Fair point. I'll fix it.

a day ago

sliamh11

This is fascinating. Are the 'moments' pre-defined or generated? Which LLM is behind it, and what does the macro-level architecture look like?

a day ago

shahabebrahimi

Thanks. Moments are not predefined at all. They change over time based on many things, including previous states and changes in the Anima's mood and perspective. Read more here: https://shahabebrahimi.substack.com/p/an-attempt-to-grow-a-m...

a day ago

dnnddidiej

Jason is quiet for now, reflecting on your words.

Should be ready to talk in 23h 58m

Cute 429!

a day ago

shahabebrahimi

Do you think the limit is too strict? The background LLM calls are actually quite expensive.

a day ago

dnnddidiej

Don't know. For a game that you don't want to be addictive, I think it's a good idea.

12 hours ago

fcpguru

This is really great. I've been thinking about building something like this for a while now. Well done.

2 days ago

shahabebrahimi

Happy to hear it. Please try it for a few days. You can give feedback in the app.

2 days ago

_wire_

Isn't it the case that everything pours from the user's container into the remotes to make this work?

Is it also the case that the more it knows, the larger the token burden to reinstate "awareness", leading to an ever-growing expense of recovering state?

Isn't this entire scheme about getting behind every sort of firewall to dump users' most private details and context into the apparatus of AI companies with no limit on retention and use?

Isn't it also true that privacy is undefined and that the infrastructure and these services are directly plumbed for the same kinds of surveillance that Snowden exposed?

Isn't it the case that users are expressing implicit consent to be exploited in any / every conceivable manner through the data they exfiltrate and are giving this prize of dominion over themselves to the barons of industry at the user's own expense?

Isn't it the case that if the assistant works as advertised, the users dig pits for themselves out of ever-growing dependency on others for the most personal aspects of their lives? Isn't it true that if the users could effectively opt out of this once they get started, this option serves only to prove that the service is a disposable gimmick?

All of these observations have applied to every aspect of personal computing since its inception, and a review of history is pretty damning: political and economic servitude was manifest even among the elite positions of society before AI, and AI magnifies the hazards by orders of magnitude.

Dear AI, please explain how or why these observations are inappropriate, wrong-headed, or based on faulty assumptions.

2 days ago

shahabebrahimi

You're right that the content goes to an LLM provider. That's unavoidable if the thing is to work. I don't (and won't) sell your data. But you're right that I can't control what LLM providers do with API traffic under their policies. That's a real tradeoff. I think that's a valid concern, and I don't have a great answer for it.

a day ago

atemerev

I have built a persistent personified agentic assistant with self-awareness and neuroscience-inspired cognitive architecture: https://lethe.gg

2 days ago

kseistrup

Tlon's bot also seems to have persistent memory:

* https://tlon.io/

The two may have vastly different implementations, though.

11 hours ago

shahabebrahimi

Looks interesting. Different goals, though. Yours is a memory layer for an assistant that serves you better. What I'm trying to build is something that has its own experience.

a day ago