Projects » Use your LOCAL LLM (ollama) with Active NPC

HAPPY NEW YEAR EVERYONE!

A special holiday gift for the adventurous.

Before Ozone almost went permanently dark, Spax Orion made an interesting discovery: you can run a local instance of ollama and pipe it to Active NPC. You can pick only ONE NPC to deliver the responses from your locally running model. You can change the system prompt to assign the personality you want the character to have; the only limits are the power of your server and the size of the model you are running. This script works best when placed in an object in-world, but it will also work when worn as a HUD. You can see it in action at xoaox.de:7000
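The wiring described above can be sketched in a few lines of LSL. This is NOT the library's actual script — the URL, model name, and system prompt below are placeholder assumptions you would adjust for your own ollama server; the request shape follows ollama's standard /api/generate endpoint.

```lsl
// Hypothetical sketch: forward the owner's nearby chat to a local ollama
// server and speak the reply. Adjust OLLAMA_URL, MODEL and SYSTEM_PROMPT.
string OLLAMA_URL = "http://127.0.0.1:11434/api/generate";
string MODEL = "llama3.2";
string SYSTEM_PROMPT = "You are a friendly innkeeper NPC. Keep replies short.";

key gRequest;

default
{
    state_entry()
    {
        // Listen on public chat, owner only (see the caveat below)
        llListen(0, "", llGetOwner(), "");
    }

    listen(integer channel, string name, key id, string message)
    {
        // Build the JSON body ollama expects; "system" sets the personality
        string body = llList2Json(JSON_OBJECT, [
            "model", MODEL,
            "system", SYSTEM_PROMPT,
            "prompt", message,
            "stream", JSON_FALSE
        ]);
        gRequest = llHTTPRequest(OLLAMA_URL,
            [HTTP_METHOD, "POST", HTTP_MIMETYPE, "application/json"],
            body);
    }

    http_response(key id, integer status, list meta, string reply)
    {
        if (id != gRequest) return;
        // With "stream": false, ollama returns the text in "response"
        llSay(0, llJsonGetValue(reply, ["response"]));
    }
}
```

In a real deployment you would hand the reply to the NPC (e.g. via osNpcSay) rather than llSay, but the request/response shape is the same.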


CAVEAT: This is ideal as a personal assistant or virtual partner. It will only respond to YOU in nearby chat; it will ignore text from your objects, NPCs, and visitors. The AI needs some improvements before it can work with Active NPC in its entirety. This matters most with advanced language models; it would be perfect for a tiny LLM designed for storytelling. While the smaller models compute replies faster, they are not as robust.
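The filtering described in the caveat amounts to a few guard clauses in the listen event. This is an assumed reconstruction of the logic, not the script's actual code; note that an object's llGetOwnerKey differs from its own key, while an avatar's does not, and osIsNpc is the OSSL test for NPCs.

```lsl
// Sketch of the owner-only filter: drop any of these checks
// to accept chat from other sources.
listen(integer channel, string name, key id, string message)
{
    if (id != llGetOwner()) return;      // respond only to YOU
    if (llGetOwnerKey(id) != id) return; // ignore scripted objects
    if (osIsNpc(id)) return;             // ignore NPCs (OSSL)
    // ...forward message to the model...
}
```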


Someone was kind enough to remind me that those in a midlife crisis can easily overlook the obvious; for that I express my gratitude. Today I am dropping the experimental version, where the Active NPC will reply to ALL AVATARS, including NPCs. You had better have a super-fast GPU, because people might spam your AI with communication requests, LOL. Ozone's NPCs are programmed to ignore conversation from more than 5 m away; see Spax Orion's OSSL RACTER script in this library for an idea of how to accomplish that.
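A range gate like the one Ozone's NPCs use can be sketched as below. This is an assumed illustration in the spirit of the RACTER script mentioned above, not its actual code.

```lsl
// Sketch: ignore chat from speakers more than 5 m away.
listen(integer channel, string name, key id, string message)
{
    vector speakerPos = llList2Vector(
        llGetObjectDetails(id, [OBJECT_POS]), 0);
    if (llVecDist(llGetPos(), speakerPos) > 5.0) return; // too far away
    // ...process the message...
}
```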

Use one script or the other... NOT BOTH.


Added by: Brettson
Last Update: 5 days ago
Project Category: Utilities

Code

File name                    Added By   Last Updated
ANPC+Ollama - EXPERIMENTAL   Brettson   6 days ago
ANPC+Ollama.ossl             Brettson   7 days ago

