How UNIX tools make GenAI useful
Yes, GenAI is now integrated everywhere. So far, however, all this has produced an archipelago of tools with some GenAI features in them but no coherent cross-tool support. That is a pity, because systems following the UNIX philosophy make integrating LLMs almost too easy!
There is a GenAI integration frenzy going on in virtually every digital product on earth right now. Every tool vendor is building GenAI features into their product, and I am not questioning the usefulness of some of those integrations. This adoption craze, however, leads to a completely fractured delivery of value to the user. As a software engineer, I deliver value using many different tools, ranging from very low-level UNIX tools like `ls`, `cat`, or `grep` through more advanced tools like `git` or `gcloud` to much more complex tools like VS Code or Chromium. What I am looking for is a tool that provides value horizontally across all of them.
Luckily, on UNIX-based systems, where almost everything is a file and the shell can tie all of these tools together, providing value across tools is much simpler. Let's consider some cases in which we can leverage an LLM through Ollama to accomplish everyday tasks.
1. Drafting a commit message based on the current changes
Git kindly provides us with diffs just by running the `git diff` command, which, thanks to our shell, we can pipe into Ollama to get a draft for a commit message. Let's add our changes to the staging area first; then we can use Ollama to draft the commit message, which we in turn feed into the `git commit` command.
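A minimal sketch of that pipeline might look like this. The model name `llama3.2` is an assumption; substitute whatever model you have pulled locally. The guards simply make the snippet a no-op outside a git repository or without Ollama installed:

```shell
# Sketch: draft a commit message from the staged diff.
# llama3.2 is an assumed model name; substitute your own.
MODEL=llama3.2
if command -v ollama >/dev/null 2>&1 && git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  git add -A
  # Pipe the staged diff into the model and use its answer
  # directly as the commit message (-F - reads it from stdin).
  git diff --staged \
    | ollama run "$MODEL" "Write a concise, imperative-mood commit message for this diff:" \
    | git commit -F -
fi
```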
And since it’s always a good idea to double-check an LLM’s output, let’s pass the `-e` flag to the `git commit` command so the drafted message opens in our editor before the commit is created:
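One way to sketch this, again assuming a locally pulled model named `llama3.2`: the drafted message is handed to `git commit -m`, and `-e` forces the editor open so we can review and amend it before committing.

```shell
# Sketch: let the model draft the message, but review it in the
# editor before committing. Model name is an assumption.
MODEL=llama3.2
if command -v ollama >/dev/null 2>&1 && git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  git commit -e -m "$(git diff --staged \
    | ollama run "$MODEL" "Write a concise commit message for this diff:")"
fi
```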
2. Review a PR
Following a similar approach as in the first example, it’s also possible to review entire changesets before merging them:
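A sketch under the same assumptions; the branch names `main` and `feature/login` are placeholders for whatever you are about to merge:

```shell
# Sketch: pipe the full changeset of a branch into the model
# for review. Model and branch names are assumptions.
MODEL=llama3.2
if command -v ollama >/dev/null 2>&1 && git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  # main...feature/login: everything the feature branch adds on top of main
  git diff main...feature/login \
    | ollama run "$MODEL" "Review this changeset. Point out bugs, style issues and missing tests:"
fi
```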
3. Combining CLI tools
Other tools can be incorporated just as easily, like `curl`, which we can use to fetch a JSON file and let Ollama distill a JSON schema for us:
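A sketch of that pipeline; both the URL and the model name are placeholders you would swap for your own endpoint and model:

```shell
# Sketch: fetch a JSON document and ask the model to derive a
# JSON Schema for it. URL and model name are placeholders.
MODEL=llama3.2
URL=https://example.com/data.json
if command -v ollama >/dev/null 2>&1; then
  curl -s "$URL" \
    | ollama run "$MODEL" "Derive a JSON Schema describing this document:"
fi
```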
As long as tools follow the UNIX design philosophy of communicating only through files and pipes, they can easily be combined with Ollama:
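For instance, constraining Ollama to emit valid JSON with its `--format json` flag lets a downstream tool like `jq` consume the answer directly. The URL, model name, and the `fields` key in the prompt are all assumptions for illustration:

```shell
# Sketch: force structured JSON output so jq can process it.
# URL, model name and the "fields" key are placeholders.
MODEL=llama3.2
URL=https://example.com/data.json
if command -v ollama >/dev/null 2>&1 && command -v jq >/dev/null 2>&1; then
  curl -s "$URL" \
    | ollama run "$MODEL" --format json \
        "List the top-level fields of this document as a JSON array under the key \"fields\":" \
    | jq '.fields'
fi
```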
And if formal correctness of the response is not an issue, this can be even simpler:
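Without the JSON constraint, the pipeline collapses to a plain question over piped input (URL and model name again being placeholders):

```shell
# Sketch: free-form answer, no structured-output guarantee.
MODEL=llama3.2
URL=https://example.com/data.json
if command -v ollama >/dev/null 2>&1; then
  curl -s "$URL" | ollama run "$MODEL" "Summarize this document in one sentence:"
fi
```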
The Ollama project has also announced that it will roll out tool support, so let’s see how we can integrate that into our daily work. But that’s a great subject for another day and another post.
There are quite a few services out there that offer this ‘LLM agent’ functionality and effectively wrap either the ChatGPT APIs or Ollama itself, selling you what you could be running on your own machine at much lower cost.
Composability for the win