Can an old MacBook with an M1 chip effectively run a local AI model? We put it to the test
- David Ciran
- Jul 8
- 3 min read

I had an idea... Would it be possible to put together an AI system that runs entirely on my laptop and still does something useful?
It's an appealing scenario, because my data never travels anywhere on the internet during processing. On the other hand, a regular MacBook is nowhere near the performance of dedicated servers with large amounts of memory, so I was curious how it would turn out.
The Challenge: Local AI Model Team for Invoice Processing
I decided to build an AI agent system that automatically reads incoming email. When a supplier invoice arrives, it checks all the requirements, verifies the order against the database, registers the invoice, and finally replies to the sender - all of it controlled by a local LLM.
The system was supposed to run locally on my MacBook Pro with an M1 processor and the base 16GB of memory. I had no idea how a four-year-old chip with base memory would cope, and I must say the result surprised me.
Architecture: Team of AI Agents
One big query wouldn't work, so I divided the task among a team of agents. The result looked like a small office team (a minimal sketch of the pipeline follows the list):
Team Coordinator - the team leader who delegates tasks
Email Detective - decides whether an incoming email contains an invoice and extracts the key information
Order Validator - verifies the supplier in the database and checks the order number
Record Writer - registers the invoice in the database
Email Responder - replies to the sender by email
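The Coretex declarations themselves aren't reproduced here, but the division of labor can be sketched in plain Python. Everything below (the Agent class, the stub behaviors, the sequential hand-off) is an illustrative assumption, not the actual Coretex API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    instructions: str             # precise task description sent to the local LLM
    run: Callable[[dict], dict]   # takes the shared state, returns updates

# Stub behaviors stand in for the actual LLM calls.
email_detective = Agent("Email Detective", "Decide if the email is an invoice.",
                        lambda s: {"is_invoice": "invoice" in s["email"].lower()})
order_validator = Agent("Order Validator", "Verify supplier and order number.",
                        lambda s: {"order_ok": True})
record_writer   = Agent("Record Writer", "Register the invoice in the database.",
                        lambda s: {"recorded": True})
email_responder = Agent("Email Responder", "Reply to the sender.",
                        lambda s: {"replied": True})

def team_coordinator(state: dict) -> dict:
    # Delegate tasks in order; stop early when the email is not an invoice.
    for agent in (email_detective, order_validator, record_writer, email_responder):
        state.update(agent.run(state))
        if not state.get("is_invoice", True):
            break
    return state

print(team_coordinator({"email": "Please find attached invoice #1234 ..."}))
```

The important design choice is the early exit: if the Email Detective decides the email isn't an invoice, the rest of the team never runs, which saves minutes of local inference time.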
Technical Solution
I used our own Coretex library, which lets programmers easily define agents, their connections, and their tools. For local model execution, I used LM Studio with Google's Gemma-3-12B, a 12-billion-parameter model with support for function calling.
Thanks to the OpenAI-compatible API, the application could switch easily between cloud and local models, while all communication stayed on my computer.
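To illustrate that switch, here is a minimal sketch using the official openai Python client pointed at LM Studio's local server. Port 1234 is LM Studio's default; the exact model identifier is an assumption and may differ in your setup:

```python
from openai import OpenAI

# LM Studio exposes an OpenAI-compatible server, by default on localhost:1234.
# The api_key is unused locally but the client requires one.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="gemma-3-12b",  # model identifier as listed in LM Studio (assumed name)
    messages=[
        {"role": "system", "content": "You are an invoice-processing assistant."},
        {"role": "user", "content": "Does this email contain an invoice? ..."},
    ],
)
print(response.choices[0].message.content)
```

Pointing the same code at a cloud provider is just a matter of changing base_url, api_key, and the model name - the rest of the pipeline never needs to know where the model runs.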
Testing Results
Speed and Performance
Average processing time per invoice: 7 minutes
Memory usage: Up to 16GB during processing
Accuracy: 95% in my tests
Cost: $0 per invoice (after initial setup)
Realistic throughput: 8-10 invoices per hour
Stress Testing
I tested the system with various scenarios:
Emails without invoices
Invoices with wrong order numbers
Emails in different languages
Messages completely unrelated to invoices
In all cases, the system correctly identified the content type and responded appropriately.
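A simplified harness for these scenarios might look like the following sketch. The sample emails and the classify_email helper are hypothetical stand-ins; in the real system, the Email Detective agent asks the local LLM instead of matching keywords:

```python
# Hypothetical test cases mirroring the stress scenarios above.
test_cases = [
    ("Meeting moved to Friday, see you there.",        False),  # no invoice
    ("Invoice #9999 attached, order PO-0000 (wrong).", True),   # bad order number
    ("Rechnung Nr. 4711 im Anhang, Bestellung PO-42.", True),   # German-language invoice
    ("You won a free cruise! Click here!",             False),  # unrelated message
]

def classify_email(text: str) -> bool:
    """Stand-in for the Email Detective; the real version queries the LLM."""
    keywords = ("invoice", "rechnung", "faktura")
    return any(k in text.lower() for k in keywords)

for text, expected in test_cases:
    result = classify_email(text)
    status = "OK  " if result == expected else "FAIL"
    print(f"{status} detected={result} expected={expected} {text[:40]!r}")
```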
Advantages of Local Solution
Data Privacy
The undeniable advantage is that no data leaves the laptop. Invoice details, company information, financial data - everything stays local. With a cloud service, that guarantee is simply impossible, which can be a real problem for sensitive financial data.
Zero Operating Costs
After initial setup, invoice processing costs are zero. No API call fees, no monthly subscriptions.
Limitations and Reality
This isn't a system for large companies with thousands of invoices daily. For such scenarios, a different approach is needed - more memory, better hardware, or hybrid solutions. But for small to medium companies with dozens of invoices daily, it's definitely a usable solution.
Development Experience
Development with the Coretex library essentially comes down to declaring the individual agents. Most of the time went into splitting the work so that each agent fulfilled a precisely defined task even on a smaller local model. What GPT handles from an approximate description has to be spelled out precisely for smaller models, ideally with examples.
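In practice, "explaining precisely" means pinning down the output format and including a worked example directly in the prompt. The wording below is an illustrative assumption, not the prompt actually used in the project:

```python
# The kind of explicit, example-driven prompt a smaller local model needs.
SYSTEM_PROMPT = """You are the Email Detective. Decide whether the email contains
a supplier invoice. Respond ONLY with JSON in exactly this shape:
{"is_invoice": true|false, "supplier": string|null, "order_number": string|null}

Example input:
  "Hi, attached is invoice INV-204 for order PO-881. Regards, Acme Ltd."
Example output:
  {"is_invoice": true, "supplier": "Acme Ltd.", "order_number": "PO-881"}
"""
```

A strict JSON contract like this makes the agent's output machine-checkable, so the coordinator can detect and retry malformed answers from the smaller model.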
Future Possibilities
Similar solutions can be used for other tasks:
Customer service
Document processing
Report generation
Social media content management
The latest generation of models runs efficiently on regular consumer hardware, enabling solutions that two years ago would have required a dedicated team of engineers.
Conclusion
I managed to prove that AI can work effectively on my laptop. It ran slowly compared to cloud solutions and the task had to be divided into separate sub-tasks, but the result was a functional system.
Two years ago, I would have said this task couldn't work on my MacBook. Today I successfully launched it. What will be possible next year?
FAQ
What hardware is needed for local AI?
At least 16GB RAM and a modern processor (M1/M2 or equivalent). More memory = better performance.
How long does processing one invoice take?
On a MacBook Pro M1 with the Gemma-3-12B model, about 7 minutes. It would be faster on newer hardware.
Is the system reliable?
In tests, it achieved 95% accuracy. For production use, I recommend thorough testing on your own data.
How much does it cost?
After initial setup, costs are zero. No API fees or subscriptions.
Can other models be used?
Yes, the system supports various local models through LM Studio or similar tools.