I’m a big fan of using agentic AI to make my life easier. I run n8n on my NAS for day-to-day admin, and use both Claude Code and OpenAI Codex for a lot of development. I’ve started playing a bit more with Claude Cowork, and although it’s really powerful and much more plug-and-play than n8n, the big problem is that it rapidly hits rate limits unless you pay £££ per month.
The other option to avoid this is to use a different model for the bulk of the LLM calls, saving the more powerful cloud-based models for the really tricky questions. You can run models locally using Ollama, and the lightweight open-source OpenAI model is a great alternative when coding on my 3090-equipped workstation, but it runs like an absolute dog on my MacBook, so it isn’t much use when I’m at work.
However, there is an alternative. All M-series Macs already have an on-device LLM, wrapped up as part of “Apple Intelligence”. The tricky problem is accessing it.
Recently this problem has been solved. Techopolis open-sourced an app called Perspective Server, which exposes this Foundation Model through a standard API that Ollama or Claude Code can plug straight into. It’s also a trivial install: download the DMG, copy it to Applications, and start it.
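Once Perspective Server is running you can poke its API directly from the terminal. As a quick sketch (the port, path and model name below are my assumptions for an OpenAI-style chat endpoint, so check the app’s settings for the real values):

```shell
# Assumed OpenAI-compatible endpoint -- the port, path and model name
# here are placeholders, not confirmed Perspective Server values.
PERSPECTIVE_URL="http://localhost:11434/v1/chat/completions"

# Ask the on-device model a question; fall back to a message if the
# server isn't reachable.
RESPONSE=$(curl -s --max-time 5 "$PERSPECTIVE_URL" \
  -H "Content-Type: application/json" \
  -d '{"model": "foundation", "messages": [{"role": "user", "content": "Hello!"}]}' \
  || echo "Perspective Server not running")
echo "$RESPONSE"
```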
But now they’ve gone one better. Today they’ve released Perspective Intelligence Web, an on-device, locally hosted chatbot that interfaces with Perspective Server. Chat directly with Apple Intelligence!
Their install script didn’t work for me even after installing bash, so here’s what did work:
Firstly, ensure you have homebrew installed, and get the required packages:
brew install git node
Then clone the repo, and install dependencies:
git clone https://github.com/Techopolis/perspective-intelligence-web-community.git
cd perspective-intelligence-web-community/next-app
npm install
Now we need to set up the environment. First create the env file, then we’ll update it:
cp .env.local.example .env.local 2>/dev/null || true
open -e .env.local
We need to add DATABASE_URL to this. I run Postgres on my MacBook from an SSD, so for this I decided to just use Neon’s free tier instead; that way I didn’t have to worry about whether the SSD was attached or not.
Create a Neon project and database (I used AWS and London). Then, in the Neon dashboard, click Connect and it will show you a ready-made connection string (it starts with postgresql://neondb_owner:, and make sure you tick the show-password box) that you can copy into the DATABASE_URL spot. Wrap it in “quotation marks” to ensure it gets handled properly.
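For reference, here’s roughly what the relevant line of my .env.local looked like afterwards (the host and password are made-up placeholders, not real credentials):

```shell
# Placeholder Neon connection string -- substitute your own
DATABASE_URL="postgresql://neondb_owner:YOUR_PASSWORD@ep-example-123456.eu-west-2.aws.neon.tech/neondb?sslmode=require"
```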
You’ll also need to fill in NEXTAUTH_SECRET= with a value, which you can generate by running
openssl rand -base64 32
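If you’d rather not copy and paste, you can generate the secret and append it to the env file in one go (this assumes you’re still in the next-app directory and .env.local exists):

```shell
# openssl rand -base64 32 produces a 44-character base64 string
SECRET=$(openssl rand -base64 32)
echo "NEXTAUTH_SECRET=\"$SECRET\"" >> .env.local
```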
I left the Apple authentication section blank as it’ll be just me using it, but feel free to also fill that in.
Next, you also need to update the config file so drizzle-kit will actually find Next’s .env.local:
open drizzle.config.ts
Then add the following to the top:
import * as dotenv from "dotenv";
// Load Next's env file so drizzle-kit can see DATABASE_URL
dotenv.config({ path: ".env.local" });
Finally, run the rest of the installation as per the instructions on GitHub:
npm install
npx drizzle-kit push
npm run dev
This should confirm that it’s running and available by default at http://localhost:3000:
% npm run dev
> perspective-intelligence-web@0.1.0 dev
> next dev
▲ Next.js 16.1.6 (Turbopack)
- Local: http://localhost:3000
- Network: http://192.168.1.41:3000
- Environments: .env.local
✓ Starting...
⚠ The "middleware" file convention is deprecated. Please use "proxy" instead. Learn more: https://nextjs.org/docs/messages/middleware-to-proxy
✓ Ready in 872ms
Browse to the link, and you’ve got your own chatbot: fairly rapid, and perfectly adequate for most problems:
