gptel for general LLM interactions including doc generation, summarization, and a handful of other tasks.
aider.el for spec-based coding.
claude-code.el for architect mode + small code gen. It’s damn expensive if you need to generate a lot of code. IndyDevDan on YT has a good video on using Claude for architect mode while shunting code gen over to aider via MCP, where you can then configure whatever LLM you want for the code. In his example he shunts over to a Gemini pre-release because, at the time of recording, it was free to use aside from Google’s usual surveillance.
You didn’t ask, but I find OpenRouter pretty darn useful for multiplexing access to models regardless of provider as well.
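If you go the OpenRouter route, gptel can treat it as just another OpenAI-compatible backend. A minimal sketch along the lines of gptel’s README (the key and model names are placeholders):

```elisp
;; Register OpenRouter as an OpenAI-compatible gptel backend.
;; Key and model names below are placeholders.
(gptel-make-openai "OpenRouter"
  :host "openrouter.ai"
  :endpoint "/api/v1/chat/completions"
  :stream t
  :key "sk-or-..."
  :models '(openai/gpt-4o anthropic/claude-3.5-sonnet))
```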
I forgot to mention that I am completely illiterate with regard to LLMs; my first interaction with ChatGPT was last Friday, I guess. So I haven’t understood much of your explanation.
Searching quickly for architect mode, I only find references to it with aidermacs, not on the GitHub page of claude-code.el.
Pretend you are the Lead Product Engineer, talking to your team of Staff/Senior Engineers about the product roadmap goals, and you capture that discussion as a specification document. That’s you talking to one or more LLMs in “architect mode”.
Once the spec has been fully captured, the Staff and Senior Engineers break out with less senior engineers and start coding and testing. That’s the code-gen/coder mode, often handled by a different set of LLMs.
The various model providers don’t document that flow very well at all, nor even that their products have special flags and configuration to enable it.
The IndyDevDan channel on YT mentioned above goes into that and is a good starting reference.
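On the flags side: the aider CLI itself has --architect plus separate --model/--editor-model options for exactly this split, and aider.el forwards arguments via its aider-args variable (as I understand its README). A hedged sketch; the model names are placeholders:

```elisp
;; Sketch: architect mode with a reasoning model for the spec/plan
;; and a cheaper model for code gen. Model names are placeholders.
(setq aider-args '("--architect"
                   "--model" "o3-mini"                          ; architect/planner
                   "--editor-model" "gemini/gemini-2.0-flash")) ; coder
```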
So many possibilities and so little time to give each of them a test.
Thank you for all those explanations. I will have a talk with some of my colleagues to try to get a better view of what could be expected from me and these beasts. At least I have some leads to follow.
@montaropdf, what did you end up using? And what have been your impressions so far? As soon as Guix System finishes installing, I’m going to start playing around with gptel and claude-code.el.
Not OP here, but I ended up with both copilot (at work) and gptel (at work and at home):
copilot has copilot-mode, which enables Copilot-driven autocompletion. The experience is better than in VSCode, since you can set copilot-idle-delay to a value that doesn’t interrupt your train of thought.
gptel works with local models, e.g. Ollama, and can be used for rewriting as well as chatting.
Copilot (in VSCode[1]) is a bit hit-and-miss. Sometimes the network requests are fast and you get a suggestion in the middle of a sentence or a function call. The suggestion seems almost right, but in the sense that an occasional turd almost seems like molten chocolate. For me, personally, that’s distracting. But if you use it in Emacs with copilot-idle-delay set to 2, it acts only once your train of thought has stopped:
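Concretely, something like this minimal setup (the TAB binding is my own choice, not a package default):

```elisp
;; copilot.el: wait 2 idle seconds before requesting a completion,
;; so suggestions appear when you pause rather than mid-thought.
(use-package copilot
  :hook (prog-mode . copilot-mode)
  :custom (copilot-idle-delay 2)
  :bind (:map copilot-completion-map
              ("TAB" . copilot-accept-completion)))
```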
But Copilot as a service also provides a chat. And while I could use copilot-chat for that, I dislike that it has lots of external dependencies. Oof[2].
gptel, on the other hand, only needs curl and can also chat with Copilot and local LLMs, which lets me run some chats offline with small models. Given that I mostly use chats as a supplement to web searches or for quick reviews, it works fine for me.
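The Ollama side is a one-liner, per gptel’s README (the host is Ollama’s default; the model is whatever you have pulled locally):

```elisp
;; Register a local Ollama backend with gptel.
(gptel-make-ollama "Ollama"
  :host "localhost:11434"    ; Ollama's default port
  :stream t
  :models '(mistral:latest)) ; any model you've pulled with `ollama pull'
```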
our toolchain is really messy, thanks to MSVC, and I haven’t adapted it to allow Emacs yet ↩︎
@ashraz, thanks for mentioning copilot-mode; I’ll have to give that a try.
gptel works with local models, e.g. Ollama, and can be used for rewriting as well as chatting.
Just to clarify, were you suggesting that gptel also works with local models? I’m currently using it with ChatGPT, Claude, and Gemini through their respective APIs. I’m still working out the kinks but am optimistic about incorporating gptel into my workflows. If you haven’t tried it, I’d recommend it. Here’s a demo Karthink put together a little over a year ago to showcase some of the functionality.
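For anyone else wiring this up, registering those three backends looks roughly like this (forms follow gptel’s README; the keys are placeholders):

```elisp
;; ChatGPT is gptel's default backend and only needs a key;
;; Claude and Gemini are registered explicitly. Keys are placeholders.
(setq gptel-api-key "sk-...")
(gptel-make-anthropic "Claude" :stream t :key "sk-ant-...")
(gptel-make-gemini "Gemini" :stream t :key "...")
```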
I’m currently using Guix System in a UTM VM, so I won’t try running any local models until I can set it up on bare metal. Looking forward to trying that, though. As an aside, I kinda wish I’d picked up a Framework instead of a MacBook last year…