socialinteldev 6 hours ago [-]
the memory engine question is the crux — most 'shared memory' approaches either go vector db (semantic search loses precision on code) or graph (precise but expensive to maintain across repo changes). curious which direction you went. one thing that works surprisingly well for cross-repo context: storing explicit schema contracts as structured facts rather than raw embeddings. agents can retrieve 'what does /api/users return' without semantic fuzziness
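the structured-facts idea above can be shown in a minimal sketch — facts keyed by (repo, endpoint) so retrieval is an exact lookup rather than a nearest-neighbor search over embeddings. all names here are illustrative, not anyone's actual API:

```python
# Toy fact store: schema contracts keyed exactly, no semantic fuzziness.
facts = {}

def store_contract(repo, endpoint, schema):
    # Store the contract as a structured fact under an exact key.
    facts[(repo, endpoint)] = schema

def get_contract(repo, endpoint):
    # Exact-key retrieval: either the precise contract or nothing.
    return facts.get((repo, endpoint))

store_contract("users-service", "GET /api/users",
               {"200": {"type": "array",
                        "items": {"id": "int", "email": "str"}}})

print(get_contract("users-service", "GET /api/users"))
```

an agent asking "what does /api/users return" hits the exact key and gets the contract back verbatim, or a clean miss — no threshold tuning involved.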
hg07 5 hours ago [-]
Hey Husain here, cofounder of Modulus
Good point, we do just that: storing explicit schemas as structured facts.
Relevance is determined by a similarity threshold on embeddings of each repo's purpose, while schema fetching is done via the structured facts themselves.
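A rough sketch of that split, assuming cosine similarity over repo-purpose embeddings and a fixed cutoff (the threshold value, the toy 3-d vectors, and all names are made up for illustration):

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

THRESHOLD = 0.75  # assumed cutoff, not the real value

# Embeddings of each repo's stated purpose (toy 3-d vectors).
repo_purpose = {
    "users-service": [0.9, 0.1, 0.0],
    "billing":       [0.1, 0.9, 0.1],
}

# Schemas live as structured facts, separate from the embeddings.
schema_facts = {
    "users-service": {"GET /api/users": {"returns": "list[User]"}},
    "billing":       {"GET /api/invoices": {"returns": "list[Invoice]"}},
}

def relevant_repos(task_embedding):
    # Step 1: embeddings decide WHICH repos matter for the task.
    return [r for r, v in repo_purpose.items()
            if cosine(task_embedding, v) >= THRESHOLD]

def fetch_schemas(task_embedding):
    # Step 2: schemas come back as exact structured facts, not embeddings.
    return {r: schema_facts[r] for r in relevant_repos(task_embedding)}

print(fetch_schemas([1.0, 0.0, 0.0]))  # only users-service clears the threshold
```

so the fuzzy part is confined to picking repos; once a repo is in scope, the schema lookup is exact.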
handfuloflight 8 hours ago [-]
How does your memory engine actually work?
hg07 4 hours ago [-]
Hey Husain here, co-founder of Modulus
Can talk about this for hours, but here's a summary:
Every repo added by the user is analyzed for technical specifications, which are stored without the code itself.
These are updated every time a significant change is made to the codebase.
At retrieval time, we check which connected repos are relevant to the ongoing task and extract their specifications as context.
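That flow can be sketched roughly like this — specs stored per repo (no source code), refreshed on significant changes, pulled in when relevant. Class and method names are hypothetical, and the keyword-overlap relevance check is a stand-in for the real embedding-based one:

```python
class SpecStore:
    def __init__(self):
        self.specs = {}  # repo -> {"summary": ..., "version": ...}

    def analyze(self, repo, summary):
        # Extract and store technical specifications, never the code itself.
        entry = self.specs.get(repo, {"version": 0})
        self.specs[repo] = {"summary": summary,
                            "version": entry["version"] + 1}

    def on_change(self, repo, summary, significant):
        # Only significant codebase changes trigger a re-analysis.
        if significant:
            self.analyze(repo, summary)

    def context_for(self, task_keywords):
        # Naive relevance check for illustration (keyword overlap stands in
        # for the embedding similarity used in practice).
        return [s["summary"] for s in self.specs.values()
                if task_keywords & set(s["summary"].lower().split())]

store = SpecStore()
store.analyze("users-service", "REST API exposing user accounts")
store.on_change("users-service",
                "REST API exposing user accounts and sessions",
                significant=True)
print(store.context_for({"sessions"}))
```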
Hope that answers your question!