
Set up Ollama Chatbot on Linux to talk to AI

Ollama Chatbot is a ChatGPT-like web UI for Ollama, designed to make using open-source large language models (LLMs) feel like ChatGPT, going as far as mimicking OpenAI's interface. In this guide, we'll show you how to set up Ollama Chatbot on your Linux system so you can talk to AI with it.

However, before we begin, understand that running large language models locally requires capable hardware. For best results, use a modern Nvidia GPU with plenty of video memory; as a rough guide, 7B-parameter models such as Llama 2 want around 8 GB available. Alternatively, if you do not have an Nvidia GPU, a multi-core Intel or AMD CPU with enough RAM can run the models in CPU mode, just more slowly.
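Not sure what GPU you have? On systems with Nvidia's driver installed, nvidia-smi reports the card model and its total video memory:

```shell
# Report the GPU model and total VRAM (requires Nvidia's driver tools).
# Falls back to a message when no Nvidia GPU or driver is present.
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader 2>/dev/null \
  || echo "No Nvidia GPU detected; Ollama will run in CPU mode."
```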

The Hero image for Chatbot Ollama.

How to install Ollama on Linux

Setting up Ollama Chatbot on your Linux system means setting up Ollama itself. Ollama is an open-source tool for downloading, managing, and running large language models (LLMs) locally. It is supported on most Linux operating systems and is quite easy to get up and running.

To start, open up a terminal window on the Linux desktop. You can open up a terminal window by pressing Ctrl + Alt + T on the keyboard. You can also open up the terminal by searching for it in the app menu. Once it is open, enter the following command to install the Ollama tool.

curl -fsSL https://ollama.com/install.sh | sh

When the command above is run, your computer will begin setting up Ollama. The script is almost entirely automated; just follow any on-screen instructions. As with any piped install script, it is worth reading the script's code first so you know what it is doing on your Linux system.

Once the installation is complete, run the ollama command to view the help information. If nothing happens after running ollama in the terminal, re-run the script and re-install it.
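A quick way to confirm the install worked is to check that the binary is on your PATH and prints a version:

```shell
# Sanity check: is the ollama binary installed and on PATH?
if command -v ollama >/dev/null 2>&1; then
    ollama --version
else
    echo "ollama not found in PATH; try re-running the install script." >&2
fi
```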

Running Ollama in the background

Ollama needs to be running in the background before you can pull down and interact with large language models. Traditionally, this means running the server directly in a terminal window and leaving it open, which is a pain.

Instead, I've written a simple Python tool that runs Ollama's server in the background on Linux and stops it on request. It is a much more elegant solution. To get this tool working, start by installing Git.

Ubuntu

sudo apt install git

Debian

sudo apt-get install git

Arch Linux

sudo pacman -S git

Fedora

sudo dnf install git

OpenSUSE

sudo zypper install git

Once you’ve installed Git, you can download the software to your Linux system using the git clone command.

git clone https://github.com/soltros/ollama-mini-daemon.git
cd ollama-mini-daemon/

Next, mark the scripts as executable so they can run.

chmod +x *.py

Finally, start Ollama in the background on your Linux system.

./daemon.py

If you want to shut down the Ollama server for any reason, you can simply execute:

./shutdown_daemon.py
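As an aside, on most systemd-based distributions the official install script also registers an "ollama" service, so systemctl is an alternative to the helper scripts above:

```shell
# Check whether the installer registered a systemd unit for Ollama.
systemctl is-active ollama 2>/dev/null || echo "ollama service is not running"
# Manage it with the usual commands:
#   sudo systemctl start ollama    # start the server
#   sudo systemctl stop ollama     # shut it down
#   sudo systemctl enable ollama   # start automatically at boot
```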

How to download Ollama models

One prompt sent to Chatbot Ollama, using Meta's codellama model.

Downloading Ollama models is done from the “library” section of the website. For this guide, we’ll download “llama2” and “orca2.” However, you can download any Ollama LLM model you wish from this page.

To download Meta's Llama 2 LLM model, use the following command.

ollama pull llama2

And, to download Microsoft’s Orca2 LLM model, you can use this command below.

ollama pull orca2

Once the two models are downloaded from the internet, they'll be stored under the ~/.ollama/models directory on your Linux system.
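You can confirm the downloads with ollama list, and give a model a quick test from the terminal before moving on to the web UI:

```shell
# List locally downloaded models (name, ID, size, modified date).
ollama list 2>/dev/null || echo "is the Ollama server running?"
# One-shot prompt straight from the shell:
ollama run llama2 "Say hello in one short sentence." 2>/dev/null || true
```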

How to install Chatbot Ollama on Linux

Setting up Chatbot Ollama starts with installing Node.js, since the Chatbot Ollama web UI (which mimics the look of ChatGPT) runs on it. You'll want a recent release; Node.js 18 or newer is the safest choice.

Ubuntu/Debian

curl -fsSL https://deb.nodesource.com/setup_20.x | sudo bash -

sudo apt-get install -y nodejs

Arch Linux

sudo pacman -S nodejs npm

Fedora

sudo dnf install nodejs

OpenSUSE (Leap)

sudo zypper install nodejs18

Or, if on Tumbleweed:

sudo zypper install nodejs-default

Once everything is installed, use the git clone command to download the Ollama Chatbot tool to your computer.

git clone https://github.com/ivanfioravanti/chatbot-ollama.git

After cloning the software to your system, enter the “chatbot-ollama” folder using the cd command.

cd chatbot-ollama/

Run the npm ci command to install the required dependencies on your system.

npm ci

Finally, you can start up the Chatbot UI on your system using npm run dev.

npm run dev

Once the Chatbot UI is started, navigate to:

http://localhost:3000
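Chatbot Ollama expects to find the Ollama server at its default address, http://localhost:11434. If the UI loads but no models appear, verify the server is reachable first:

```shell
# The Ollama server answers on port 11434; the root endpoint
# responds with "Ollama is running" when the server is up.
curl -s http://localhost:11434/ || echo "Ollama server is not reachable"
```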

How to use Ollama Chatbot on Linux

To use Ollama Chatbot on Linux, start by opening http://localhost:3000 in a web browser. Once it loads, you'll see "Chatbot Ollama," followed by a box that lets you select the model and set the temperature. To start, choose the model you wish to use (Llama 2 or Orca 2).

The initial start-page for Chatbot Ollama.

After choosing your model, set the temperature. Keep in mind that a higher temperature means more randomness and creativity, while a lower one produces more focused, predictable output. When you've made your selections, type your prompt and press the Enter key. Your LLM model will begin responding in real time.
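Temperature is a parameter of the underlying model, not just the UI; you can set it yourself when calling Ollama's REST API directly. A low value such as 0.2 keeps answers focused, while values near 1.0 add variety:

```shell
# A one-off, low-temperature request to the Ollama API
# (the Ollama server must be running on port 11434).
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Define Markdown in one sentence.",
  "stream": false,
  "options": { "temperature": 0.2 }
}' || echo "is the Ollama server running?"
```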

Meta's Llama 2 explaining the benefits of Markdown.
