It is the second LLM wiki on the frontpage today!
I wish the scene were more collaborative, instead of everyone writing their own. But I guess this is the LLM curse: it is too easy to start. I am afraid it will all go in the LangChain direction, with VC funding solidifying designs that are not yet ready and choices that would normally be superseded.
drunkan 1 day ago [-]
Lots of good ideas and divergent methods and sources here cheers for the link
kenforthewin 23 hours ago [-]
This is honestly a great synopsis of Atomic and its design tradeoffs. Thanks! Giving commonplace a look.
The reviews are done automatically - here are the instructions: https://github.com/zby/commonplace/blob/main/kb/agent-memory...
I am open to changing these instructions - it cannot be just about making your system look better - but I'll try to incorporate genuine ideas for how to improve these reviews.
I sure love when the local-first software defaults to a non-local option for its main feature.
embedding-shape 1 day ago [-]
Somehow, in the AI world, "local-first" means a local harness talking to a remote model, almost never "local harness talking to local model". But then "open source model" apparently also means "you can download the weights if you agree to our license" and almost never "you can see, understand and iterate on what we did", so the definitions already drifted a lot between the two ecosystems.
kenforthewin 24 hours ago [-]
Atomic supports any generic OpenAI-compatible LLM provider, including Ollama, LM Studio, etc.
embedding-shape 23 hours ago [-]
But local-first !== defaults to local inference, right?
kenforthewin 23 hours ago [-]
I'm not sure I understand the question. Regardless of what provider you choose - be it cloud based or local - you have to provide setup information such as host, authentication, etc. So it "defaults" to nothing; you have to select something.
stonogo 21 hours ago [-]
Maybe this will be a clearer question: What does "local-first" mean in the title that you typed in for this HN submission?
kenforthewin 19 hours ago [-]
Local first means running Atomic with local models is not an afterthought. It’s a first class citizen that works just as seamlessly as running with a cloud provider - assuming you’ve done the work to provision the local models and their connections yourself.
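For context, the reason "works just as seamlessly" is plausible here is that OpenAI-compatible providers differ only in base URL, model name, and credentials. A minimal sketch of that idea (the hosts and model names below are illustrative examples, not Atomic's actual configuration):

```python
import json

def chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion request.

    The same payload shape works against Ollama, LM Studio,
    llama.cpp, vLLM, or a cloud provider like OpenRouter."""
    return {
        "url": f"{base_url}/chat/completions",
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# Local and cloud differ only in base URL and model name.
local = chat_request("http://localhost:11434/v1", "llama3.1", "hello")
cloud = chat_request("https://openrouter.ai/api/v1", "openai/gpt-4o", "hello")
print(local["url"])  # → http://localhost:11434/v1/chat/completions
```

Credentials would ride along as an Authorization header; local servers typically accept any placeholder key.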
kenforthewin 1 day ago [-]
I'm not sure what the dunk is supposed to be here... Atomic supports the exact same feature set with local models as it does with OpenRouter. Is your gripe just that OpenRouter is the first option in the dropdown?
max-privatevoid 1 day ago [-]
Yes. Why even call it local-first when local isn't first? Not to mention, for some reason they decided to only support Ollama instead of giving you the option to connect to any OpenAI-compatible server, which would make this work with any other inference server such as llama.cpp and vLLM as well as Ollama. (and also most SaaS inference providers, including OpenRouter, so the custom integration would not be necessary either, https://schizo.cooking/schizo-takes/9.html)
kenforthewin 1 day ago [-]
Did you think local-first meant how a dropdown is sorted?
OpenAI-compatible is indeed one of the provider options for Atomic. Ollama and OpenRouter are separate options to allow for easier selection of models from these specific providers.
https://atomicapp.ai/getting-started/ai-providers/
max-privatevoid 1 day ago [-]
The online documentation does not suggest that using a generic OpenAI-compatible server is an option, and it once again lists the non-local option first.
> OpenAI-compatible is indeed one of the provider options for Atomic. Ollama and OpenRouter are separate options to allow for easier selection of models from these specific providers.
Why is this necessary over just presenting the result of `/v1/models`?
You can say it's just the ordering of a dropdown, but to me it seems pretty clear that this thing is developed with the idea that you'll most likely use a SaaS provider.
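For reference, the `/v1/models` endpoint mentioned above returns the same JSON shape across OpenAI-compatible servers (OpenAI, Ollama, llama.cpp, vLLM), so a model picker could in principle be populated generically. A sketch against a canned payload (the model ids are made up):

```python
import json

# Canned /v1/models response in the shape OpenAI-compatible
# servers return; the model ids here are invented examples.
sample = json.loads("""
{"object": "list",
 "data": [{"id": "llama3.1:8b", "object": "model"},
          {"id": "qwen2.5:14b", "object": "model"}]}
""")

def model_ids(payload: dict) -> list[str]:
    """Extract the model ids a /v1/models endpoint advertises."""
    return [m["id"] for m in payload.get("data", [])]

print(model_ids(sample))  # → ['llama3.1:8b', 'qwen2.5:14b']
```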
kenforthewin 24 hours ago [-]
It has supported local LLMs from the beginning; it was not something that was just tacked on. I don't know what else to tell you. Your assumptions are just wrong.
Lalabadie 24 hours ago [-]
Yes, hah.
"Local-first, your data never leaves the computer! Except once to go to the biggest information hoarders on the Internet."
kenforthewin 24 hours ago [-]
Atomic supports any generic OpenAI-compatible LLM provider, including Ollama, LM Studio, etc.
danielgall500 1 day ago [-]
Awesome! How did you find using Tauri? Were there any particular pain points?
bryanhogan 1 day ago [-]
Genuinely curious, how is this different from pointing Claude Cowork at an Obsidian Vault?
kenforthewin 1 day ago [-]
Biggest difference is Atomic leverages an LLM to auto-tag and a text embedding pipeline to drive semantic search - so the knowledge base is self-organizing. The bet here is that having an agent grep the filesystem is fine for a carefully curated, relatively small set of markdown files. It starts to degrade if you approach your knowledge base as a place to put everything: personal notes, articles you find interesting, entire textbooks if you want to. Having a vector database in this context is pretty much required past a certain scale; a filesystem-based approach is just an incredibly inefficient way to do retrieval in this context, and your agent is bound to miss important data points.
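To make the retrieval argument above concrete: grep only finds literal string matches, while embedding search ranks every note by vector similarity to the query, so paraphrases still surface. A toy sketch (the vectors are hand-picked stand-ins, not the output of a real embedding model):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy note embeddings; a real pipeline would get these from a model.
notes = {
    "meeting-notes.md": [0.9, 0.1, 0.0],
    "textbook-ch3.md":  [0.1, 0.8, 0.2],
    "reading-list.md":  [0.2, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # e.g. "what did we decide in the sync?"

# Rank notes by similarity instead of literal string match.
best = max(notes, key=lambda name: cosine(query, notes[name]))
print(best)  # → meeting-notes.md
```

A real vector database does the same ranking with an approximate-nearest-neighbor index instead of a full scan, which is what keeps it cheap past a few thousand notes.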
thomas_viaelo 1 day ago [-]
Does the LLM auto-tagging and embedding pipeline run on the device, or are they remote calls?
redrove 23 hours ago [-]
So an Obsidian plugin? Got it.
kenforthewin 23 hours ago [-]
One can imagine an Obsidian plugin of any arbitrary level of complexity, given it's written in a Turing-complete language.
ariejan 1 day ago [-]
Killer feature: add audio transcription. Record that meeting; just tell the app what you want to remember. It gets transcribed and then processed like any other note.
CrypticShift 1 day ago [-]
I would love to be able to do the clustering from a CSV instead of a collection of Markdown files. I know I can easily generate the files, but I used to do this directly for very short text inputs (just titles or words) on nomic.ai (before they pivoted to 'Enterprise').
Linell 1 day ago [-]
I've been tinkering with my own version of this idea off and on for months, and it's great to see someone finally make the thing that I've been wanting since LLMs hit the scene. Congrats on everything you've shipped!
sdevonoes 1 day ago [-]
They keep adding this “cloud of dots” where each dot represents a concept or something you wrote, and they are linked to other dots… sure it’s pretty the first time you see it, but it’s not useful at all beyond that
baddash 1 day ago [-]
but there are a lot of them
atomicnotlocal1 21 hours ago [-]
The app Atomic.app com.atomic.app (signer: Developer ID Application: Foldingspace Labs LLC (D3SX98L77N)) will not run without access to fonts.googleapis.com:53, fonts.gstatic.com:53, api.fontshare.com:53.
kenforthewin 17 hours ago [-]
nice username :)
fair point, the app makes requests to load fonts. we'll fix that next release.
voidhorse 1 day ago [-]
I feel like most of these applications boil down to "Obsidian but with AI integration baked in up front". It'd be interesting to see approaches that actually rethink the commonplaces of the experience (graph view, etc.) rather than just reproduce the same thing "with AI".
Am I the only one who feels a bit betrayed after reading LLM text? I am not even willing to try out the app after I notice… which is a shame.
At least polishing the obvious parts would help a lot and is not that much work.
kenforthewin 23 hours ago [-]
Thanks for the feedback. Yeah, I admit copywriting is not my forte. I'm a solo dev, so I'm focusing most of my time and energy on the product itself. There are always 100 things I could be polishing for Atomic - social media presence, website, docs, etc. Even with AI there just aren't enough hours in the day - you have to triage somehow.
supern0va 1 day ago [-]
As someone who makes regular use of the em-dash, I find comments like this rather maddening.
I still refuse to self-censor to avoid having my actual writing get flagged by someone as LLM written.
Lalabadie 24 hours ago [-]
I don't mind the em-dash, but that whole front page is very much "We prompted ourselves a web site"
nathan_compton 1 day ago [-]
Maybe I'm just spoiled with a large working memory, but I don't want an AI agent thinking or remembering or synthesizing for me. Seems like a great way to never have a new idea.