LLM packages: which one to choose?

Hello,

In the coming month, the customer I work for will allow the use of LLMs (a.k.a. AI) for software development. They are currently reviewing some of them.

So, I am looking for a package to interact with those beasts.

Currently, I have found:

  • copilot, which is specific to one LLM, GitHub Copilot
  • gpt.el
  • ellama

From what I have seen, copilot provides auto-completion/auto-suggestions, while gpt.el and ellama seem to be more for “chatting” with LLMs.
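For concreteness, enabling those inline suggestions with the community copilot.el package is usually just a mode hook plus an accept binding. A minimal sketch, assuming copilot.el is installed (e.g. from its GitHub repo) and you have signed in to GitHub Copilot; the keybinding is a personal choice:

```elisp
;; Minimal sketch: inline Copilot suggestions in programming buffers.
;; Assumes copilot.el is installed and `M-x copilot-login' has been run.
(use-package copilot
  :hook (prog-mode . copilot-mode)            ; suggest while you type code
  :bind (:map copilot-completion-map
              ("<tab>" . copilot-accept-completion)))
```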

  1. Can either gpt.el or ellama be used for auto-completion and/or auto-suggestions?
  2. Do you know of any other packages that allow interacting with LLMs?
  3. What are the differences between gpt.el, Ellama, and any similar packages you may know of?

gptel for general LLM interactions including doc generation, summarization, and a handful of other tasks.

aider.el for spec-based coding.

claude-code.el for architect mode plus small code generation. It’s damn expensive if you need to generate a lot of code. IndyDevDan on YouTube has a good video on how to use Claude for architect mode while shunting code generation over to aider via MCP, where you can then configure whatever LLM you want for code. In his example he shunts over to a Gemini pre-release because, at the time of recording, it was free to use other than Google’s usual surveillance.

You didn’t ask, but I also find OpenRouter pretty darn useful for multiplexing access to models regardless of provider.
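To make that concrete: gptel can treat OpenRouter as an OpenAI-compatible backend, so one configuration covers many providers. A sketch along the lines of gptel’s README; the model names are only examples of what OpenRouter exposes, and you should supply your own key:

```elisp
;; Sketch: register OpenRouter as a gptel backend (OpenAI-compatible API).
;; Model names are illustrative; list whichever models you actually use.
(gptel-make-openai "OpenRouter"
  :host "openrouter.ai"
  :endpoint "/api/v1/chat/completions"
  :stream t
  :key "YOUR-OPENROUTER-API-KEY"   ; better: a function or auth-source lookup
  :models '(anthropic/claude-3.5-sonnet
            google/gemini-flash-1.5))
```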

As the author of an LLM Chat AI Assistant for Emacs I think I should mention:

I’m currently in the process of looking at some form of autocomplete, but of course with local LLMs you are a little limited on speed and accuracy.

Although ollama is half of the name, it can also link up to ChatGPT, Claude, Grok, and Gemini.

So, if I understand correctly, your package is for local LLMs, i.e. ones running on the PC itself?

As far as I know, in my situation, it will be one of the numerous LLMs in the cloud.

Sorry for the late reply,

I forgot to mention that I am completely illiterate with regard to LLMs; my first interaction with ChatGPT was last Friday, I guess. So I haven’t understood a lot of your explanation :worried:.

Searching quickly for architect mode, I only find references to it with aidermacs, not on the GitHub page of claude-code.el.

No worries.

As to architect versus other modes:

  1. Pretend you are the Lead Product Engineer, talking to your team of Staff/Senior Engineers about the product roadmap goals, and capture that information as a specification document. You are talking to one or more LLMs in “architect mode”.

  2. Once the spec has been fully captured, the Staff and Senior Engineers break out with less senior engineers and start coding and testing. That’s the code-gen/coder mode, often with a different set of LLMs.

The various model providers don’t document that flow very well at all, nor even that their products have special flags and configuration to enable that sort of flow.

The mentioned IndyDevDan channel on YT goes into that and is a good starting reference.

They are not really just for chatting. Ellama has a lot of nifty functions for many use cases, along with a nice transient-menu interface.

The author also wants to add auto-completion functionality.
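For reference, a typical Ellama setup along the lines of its README looks like the following. This is a sketch only: it assumes a local ollama server and the llm package are available, and the model names are examples, not requirements:

```elisp
;; Sketch: Ellama backed by a local ollama server via the llm package.
;; The chat/embedding model names are examples; use the models you pulled.
(use-package ellama
  :init
  (setopt ellama-language "English")
  (require 'llm-ollama)
  (setopt ellama-provider
          (make-llm-ollama
           :chat-model "llama3:8b-instruct-q8_0"
           :embedding-model "nomic-embed-text")))
```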

So many possibilities, and so little time to give each of them a test…

Thank you for all those explanations. I will have a talk with some of my colleagues to try to get a better view of what could be expected from me and those beasts. At least I have some leads to follow.