Meet Pi-CARD: Serving up a digital assistant on Raspberry Pi

LLMs running on a dedicated card: The final frontier as hacker makes it so

Consider your wish for an AI digital assistant that runs locally and offline officially granted. Not by a major industry player, naturally – your personal data is too enticing – but by a guy on GitHub who built one to run on a Raspberry Pi. 

Data scientist and machine learning engineer Noah Kasmanoff developed the Raspberry Pi – Camera Audio Recognition Device, or Pi-CARD, to do "anything a standard LLM can do in a conversational setting," all without needing to rely on, or share data with, the outside world. Pi-CARD can hold conversations, and if a camera is added to the Raspberry Pi it can be asked to take photos, describe pictures, and answer questions about the image.

"I wanted to create a voice assistant that is completely offline and doesn't require any internet connection," Kasmanoff said in Pi-CARD's readme. "I wanted to ensure that the user's privacy is protected and that the user's data is not being sent to any third party servers."

And he managed to get it all up and running on a Raspberry Pi 5 connected to a USB microphone, a speaker and a camera. 

Kasmanoff told The Register that he sees Pi-CARD's purpose as an offline assistant able to access files on a local machine. "So while not requiring Wi-Fi, [Pi-CARD] can still tell you info like what you wrote in your journal two weeks ago," Kasmanoff cited as an example.

"[Pi-CARD] can talk to users, tell them jokes, answer questions, and so on in the same way ChatGPT can, without being connected to the internet," Kasmanoff told us. It's also able to remember things it has discussed during a chat, and doesn't require constant use of its wake word, making it more capable than some other digital assistants currently on the market (ahem, Siri).
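A hypothetical sketch of how an assistant like this might implement those two behaviors: each exchange is appended to a running history that is replayed into the model's prompt (so follow-up questions can refer back to earlier turns), and once woken the loop stays awake rather than demanding the wake word every time. All names here are illustrative, not Pi-CARD's actual code.

```python
# Illustrative conversation-memory and wake-word gating logic.
WAKE_WORD = "computer"  # assumed wake word for this sketch

class Conversation:
    def __init__(self, system_prompt="You are a helpful offline assistant."):
        self.history = [("system", system_prompt)]
        self.awake = False  # once woken, no wake word needed per turn

    def handle(self, utterance: str) -> bool:
        """Return True if this utterance should be sent to the model."""
        if not self.awake:
            if WAKE_WORD in utterance.lower():
                self.awake = True
                return True
            return False
        return True  # already mid-conversation: no wake word required

    def add_exchange(self, user_text: str, reply: str) -> None:
        self.history.append(("user", user_text))
        self.history.append(("assistant", reply))

    def build_prompt(self) -> str:
        # Replay the whole history so the model "remembers" the chat.
        return "\n".join(f"{role}: {text}" for role, text in self.history)
```

In practice a loop like this would feed `build_prompt()` plus the latest utterance to the local model each turn, which is what lets a follow-up like "why?" make sense without repeating the original question.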

Of course, running locally means Pi-CARD can be a bit slow, and it lacks a lot of the data that more cloud-centric LLMs can access. "The system is designed to be a fun project that can be a somewhat helpful AI assistant," Kasmanoff said, adding that there are "a lot of improvements to be made."

One thing he'd like to add in the future is the ability to interrupt Pi-CARD while it's speaking, as OpenAI demonstrated on Monday when it unveiled GPT-4o. Kasmanoff also wants to improve the system's response time.

Pi-CARD can also be connected to external APIs and other services if a user wants to take it online, and can be used to control external devices. Don't expect it to speak to you in the voice of Jean-Luc Picard, though. 

"I wish," Kasmanoff told us when asked if Star Trek star Patrick Stewart's voice is available for his Pi counterpart. "I am using a generic dictation system, which while not exciting, gets the job done."

Pi-CARD: What's inside

There are a couple of familiar AI models running under Pi-CARD's hood - or C++ versions of them, at least.

Pi-CARD uses C++ ports of both OpenAI's Whisper and Meta's LLaMA, handling speech recognition and language generation, respectively. Beyond that is the Python file that operates the assistant itself, and some other necessary software, but that's it for the most part: just a Python script and those models.
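To make that architecture concrete, here is a minimal sketch of such a pipeline: whisper.cpp turns a recorded WAV into text, llama.cpp turns that text into a reply, and an offline TTS command reads the reply aloud. The binary paths, model filenames, and flags below are assumptions based on the stock whisper.cpp and llama.cpp command-line tools, not Pi-CARD's actual script.

```python
import subprocess

def whisper_cmd(wav_path, model="models/ggml-base.en.bin"):
    # whisper.cpp's CLI: transcribe a WAV file to stdout (assumed binary name).
    return ["./whisper-main", "-m", model, "-f", wav_path, "--no-timestamps"]

def llama_cmd(prompt, model="models/llama-7b-q4.gguf", n_predict=128):
    # llama.cpp's CLI: complete the prompt locally (assumed binary name).
    return ["./llama-main", "-m", model, "-p", prompt, "-n", str(n_predict)]

def run(cmd):
    # Run a command and return its trimmed stdout.
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

def respond(wav_path):
    # Full loop: speech -> text -> reply -> speech, all on-device.
    transcript = run(whisper_cmd(wav_path))
    reply = run(llama_cmd(f"User: {transcript}\nAssistant:"))
    run(["espeak", reply])  # simple offline TTS available on the Pi
    return reply
```

Because everything is a local subprocess, nothing in this loop ever touches the network, which is the whole point of the design.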

Kasmanoff told us he plans to continue tinkering with Pi-CARD as time allows, but it is just a hobby project. 

"I believe that the most useful part in AI assistants like this is to mean less time on our phones [or] absorbed on computers," Kasmanoff told us. "[Pi-CARD] has helped me understand what is possible so far." 

Kasmanoff added that the hardware constraints were central to the project too, saying he wanted something able to run without needing as much computing power or energy as larger models. Or, of course, without contributing to an AI ecosystem that "depends on sharing your data," as Kasmanoff put it. ®
