Omni-PI
- Vishnu Chada
- Jan 19
- 3 min read
Electronics inside:
-Raspberry Pi 5 (8GB): The brain of the chatbot, providing the processing power to run the
qwen3-vl:2b model
-Whisplay AI HAT: Audio expansion board with an integrated speaker, microphone, and
screen
-PiSugar 3 Plus (Battery/UPS): Attaches to the bottom of the Raspberry Pi 5 to provide power,
with a physical power button and a customizable button you can program from the PiSugar app
-Raspberry Pi Camera Module 3: Provides the camera feed for vision tasks
-High-speed microSD card: 32GB or larger is needed to store the AI models
3D printed Case Parts:
-Main Housing: Has an SD card cutout for easy access
-Side Covers/Panels: Screwed in with M2*4 screws to keep the chatbot's internals from falling out
-Camera Front Plate and Back Plate: The Camera Module 3 mounts to the face of the
device for vision tasks
-Button: Allows the case button to press the Whisplay HAT's talk button
Assembly Process:
-The PiSugar 3 Plus sits at the bottom, the Pi 5 in the middle, and the Whisplay AI HAT on top.
Even though the Whisplay sits above the fan, there is a designed gap so that the Pi 5's active
cooler still functions.
-The Camera Module 3's ribbon cable must be carefully routed into the main case body before
the Pi AI chatbot is fully slid into place. The camera is then screwed into the 3D-printed
front cover with M2*4 screws and secured with the back plate.
-Once everything is assembled, you must flash the SD card with Raspberry Pi OS, then
download the drivers for the Whisplay HAT from this GitHub repo (AI Chatbot, Whisplay Driver)
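As a rough sketch, assuming the driver repo ships an install script (the URL and script name below are placeholders, not the actual ones; use the repo link from the post and follow its README), the driver setup looks like:

```shell
# Placeholder URL -- substitute the real "Whisplay Driver" repo link from the post.
git clone https://github.com/<user>/whisplay-driver.git
cd whisplay-driver
# Many HAT driver repos provide an install script; check the repo's README
# for the exact command.
sudo ./install.sh
```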
Software setup:
Flash the SD card with Raspberry Pi OS and then install the drivers.
Open the PiSugar app on your phone and connect to the chatbot over Bluetooth.
Enter your Wi-Fi SSID and password through the app interface. The Pi will then display its IP
address, which you can use to SSH into the device using Termius (username: pi,
password: raspberry)
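A minimal sketch of the SSH step from any terminal (the IP below is a placeholder for the address the Pi displays):

```shell
# 192.168.1.42 is a placeholder -- use the IP address shown on the Pi's screen.
PI_IP=192.168.1.42
ssh pi@"$PI_IP"
# When prompted, enter the default password ("raspberry"), and consider
# changing it afterwards with the passwd command.
```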
Local Vision Setup:
Navigate to the chatbot folder and edit the .env file.
Uncomment ENABLE_CAMERA=true.
Set VISION_SERVER=ollama.
Run ollama pull qwen3-vl:2b to download the vision-capable model.
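The steps above can be sketched as shell commands. The chatbot folder path is an assumption (adjust it to wherever the repo was cloned), and the sed edits assume the .env lines look the way the post describes:

```shell
cd ~/chatbot   # placeholder path for the chatbot folder

# Uncomment ENABLE_CAMERA=true in .env
sed -i 's/^#[[:space:]]*ENABLE_CAMERA=true/ENABLE_CAMERA=true/' .env
# Point vision processing at the local Ollama server
sed -i 's/^VISION_SERVER=.*/VISION_SERVER=ollama/' .env

# Download the vision-capable model (note the lowercase "ollama" command)
ollama pull qwen3-vl:2b
```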
AI capabilities & Models :
Option 1: Fully offline
-Model: Qwen3:1.7b. It can handle voice commands and tool calls like adjusting the volume.
-Vision Model: Qwen3-vl:2b. It can describe photos that you take using the Raspberry Pi Camera Module 3.
Option 2: Local Network Acceleration
-Run Ollama on a powerful computer (Mac or PC) on the same network the Pi chatbot is
connected to.
-Change OLLAMA_HOST in the Pi's .env file to the IP address of the computer you are
using.
-Vision processing time drops from about 2 minutes to about 5 seconds
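The change described above amounts to one line in the Pi's .env file. The IP below is a placeholder for the desktop's address, and the http:// URL with port 11434 is an assumption based on Ollama's default listening port:

```
# In the Pi's .env file -- point the chatbot at the desktop's Ollama server.
# 192.168.1.50 is a placeholder; 11434 is Ollama's default port.
OLLAMA_HOST=http://192.168.1.50:11434
```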
Option 3: Google Cloud integration (API KEYS)
-Model: Gemini 3 Pro
-You can pay for an API key and paste it into the .env file, which sends prompts to Google's
servers for processing. This drops the time to about 2 seconds for recognizing your voice and
giving you the output.
-Costs $20 a month
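Assuming the chatbot software also reads the key from its .env file (the variable name below is hypothetical; check the project's .env.example for the real one), the cloud setup is one line:

```
# Hypothetical variable name -- confirm against the chatbot's .env.example.
GEMINI_API_KEY=paste-your-api-key-here
```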
Power consumption and CPU Temps:
-The device ran at 5V, with power draw climbing once I started running larger
models. This caused the Pi's CPU to heat up to 50 degrees Celsius when it was idling
and reach 65 degrees when it was running the AI model.
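For reference, the temperatures above can be read on the Pi itself with vcgencmd, which ships with Raspberry Pi OS (the parsing below assumes its usual temp=50.0'C output format):

```shell
# vcgencmd prints something like: temp=50.0'C
raw=$(vcgencmd measure_temp)
# Strip the "temp=" prefix and the "'C" suffix to get the bare number
temp=${raw#temp=}
temp=${temp%\'C}
echo "CPU temp: ${temp} C"
# 0x0 here means the Pi has never throttled since boot
vcgencmd get_throttled
```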
Why I built it:
- This project shows people how to DIY a chatbot using a few basic components found on
the shelves of an electronics store: a Raspberry Pi 5, a Whisplay AI
HAT, and a PiSugar 3 Plus. It lets areas without internet access run AI models at
home without the pain of hunting for a connection. It also respects your privacy, because
the AI model runs locally, so major tech giants like OpenAI are not collecting your
data and training their models on it. This project is great for tech
hobbyists or anyone who wants to keep their data out of the hands of major tech
companies.
IMAGES OF THE HARDWARE THAT IS USED:
Pi 5 8GB:

Pisugar Whisplay:

Pisugar 3 Plus Battery:

Raspberry Pi Camera Module 3:

© 2026 luminarc.ai