Re: Hybrid AI with Swift and Firebase, and a font you can hear
Hey everyone!
I’ve been having a lot of fun with MCP servers these past couple of weeks. You might remember I created an MCP server for Keynote a while ago (here it is, creating a slide deck for me), and after I gave my talk “Beyond Prompts: Building Intelligent Applications With Genkit and the Model Context Protocol” at AI_dev in Amsterdam last week, I sat down to vibe code an MCP server for Sofia, my Second Brain app. This has already been a massive time saver for curating this newsletter - I can now task Gemini CLI (or Cursor) with pulling down all curated links from my knowledge base and generating the skeleton of the newsletter for me. Don’t worry - all the words are still written by me, and I personally select all the links.
Speaking of (code) generators, I had the opportunity to attend CocoaHeads Hamburg this week, where Sören Gade gave a talk about how they use code generation at LichtBlick to power their network layer (using Apollo GraphQL), their analytics (using a home-grown generator called “Trakken”), and their design system (using a generator called “Dissen”). Sören later shared that his past experience with cross-platform mobile development SDKs wasn’t too positive, and it appeared to me that their code-generation approach gives them the best of both worlds - native look & feel (and performance), combined with shared design tokens, analytics IDs, and a shared network layer.
Meetups and conferences are a great way to meet other developers - I’m always inspired by hearing their stories and learning from their experiences.
I’m including an updated list of conferences for 2025 in this newsletter. If you can’t afford to be away from work during the week, or if attending a multi-day conference is too pricey, here are some meetups to consider:
Personally, I am really looking forward to SwiftLeeds in October. Adam Rush and his team know how to put together a great event, and this year’s lineup looks fantastic!
I also saw that “The World’s Northernmost Apple Developers Conference” (Arctic Conference) will happen again next year - keep your eyes peeled!
I was inspired by Majid’s blog post (see further down in this newsletter issue), and decided to implement a hybrid AI approach in Sofia, my Second Brain app. It uses Apple’s Foundation Models framework for on-device inference, and automatically falls back to Gemini 2.5 Flash Lite (via Firebase AI Logic) if local inference isn’t possible - for example because the model isn’t available on an older phone, or (and this actually happens quite frequently) because the full text of the article the user wants to summarise exceeds the 4,096-token context window that Apple’s Foundation Models support.
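Here is a minimal sketch of what that fallback logic can look like. The function name and the character-count heuristic are made up for illustration, it assumes FirebaseApp.configure() has been called at app startup, and you should double-check the Foundation Models and Firebase AI Logic APIs against the current docs:

```swift
import FoundationModels
import FirebaseAI

// Hybrid summarisation: prefer the on-device model, fall back to Gemini.
// `summarize(_:)` and the 12,000-character heuristic are illustrative only.
func summarize(_ article: String) async throws -> String {
    let prompt = "Summarise the following article:\n\n\(article)"

    // Use the on-device model if it is available and the article is likely
    // to fit into its 4,096-token context window.
    if case .available = SystemLanguageModel.default.availability,
       article.count < 12_000 {
        let session = LanguageModelSession()
        let response = try await session.respond(to: prompt)
        return response.content
    }

    // Otherwise, call Gemini 2.5 Flash Lite via Firebase AI Logic.
    let model = FirebaseAI.firebaseAI(backend: .googleAI())
        .generativeModel(modelName: "gemini-2.5-flash-lite")
    let response = try await model.generateContent(prompt)
    return response.text ?? ""
}
```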
Next week, I will look into using Remote Config to remotely control which model is used - this way, you will be able to prioritize using a cloud model over a local model, or the other way round, or specify which model version you want to use, or even adjust the prompt you’re using. Stay tuned!
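To give you an idea of the direction: a Remote Config parameter could decide which path the code above takes. The parameter names below are hypothetical - they’re just placeholders for whatever you define in the Firebase console:

```swift
import FirebaseRemoteConfig

// Fetches the (hypothetical) parameters that control the hybrid behaviour.
// Assumes FirebaseApp.configure() has already been called.
func loadSummarizerConfig() async throws -> (backend: String, model: String, prompt: String) {
    let remoteConfig = RemoteConfig.remoteConfig()
    remoteConfig.setDefaults([
        "summarizer_backend": "on-device" as NSString,   // or "cloud"
        "summarizer_model": "gemini-2.5-flash-lite" as NSString,
        "summarizer_prompt": "Summarise the following article:" as NSString
    ])

    // Pull the latest values from the Firebase backend and activate them.
    _ = try await remoteConfig.fetchAndActivate()

    return (
        backend: remoteConfig.configValue(forKey: "summarizer_backend").stringValue,
        model: remoteConfig.configValue(forKey: "summarizer_model").stringValue,
        prompt: remoteConfig.configValue(forKey: "summarizer_prompt").stringValue
    )
}
```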
One of the core tenets of object-oriented programming is encapsulation. Many languages have the concept of properties, but I think Swift has one of the most ergonomic implementations.
Antoine looks at computed properties, and provides a good overview of what they do and when you should (or shouldn’t) use them.
Even if you’ve been using computed properties for longer than you can remember, you should read this post - it includes a couple of things that you might not have known about…!
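If you need a refresher, this is all a computed property is - a property that derives its value (and optionally accepts new values) without storing anything itself:

```swift
struct Temperature {
    var celsius: Double

    // Computed property: no storage of its own, derived from `celsius`.
    var fahrenheit: Double {
        get { celsius * 9 / 5 + 32 }
        set { celsius = (newValue - 32) * 5 / 9 }
    }
}

var temperature = Temperature(celsius: 21)
temperature.fahrenheit = 100   // the setter updates `celsius`
print(temperature.celsius)     // 37.77…
```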
But of course, there is a lot more nuance. Sarah walks through the key capabilities and also explores more advanced topics, such as using JavaScript (for example, to collect all headings in a page).
Many apps have built-in search features. But did you know you can feed two birds with one scone, and implement powerful search inside your app while also making your app’s content show up in system-wide Spotlight search?
Core Spotlight is the answer, and Natalia provides an excellent overview of how it works and how you can implement it in your app.
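To give you a feel for the API, here is a quick sketch of indexing a single item - the identifiers and attribute values are made up, but the Core Spotlight calls are the standard ones:

```swift
import CoreSpotlight
import UniformTypeIdentifiers

// Index one piece of app content so it shows up in Spotlight.
// The identifiers and attribute values below are purely illustrative.
let attributes = CSSearchableItemAttributeSet(contentType: .text)
attributes.title = "Hybrid AI with Swift and Firebase"
attributes.contentDescription = "Notes on combining on-device and cloud models."

let item = CSSearchableItem(
    uniqueIdentifier: "note-42",
    domainIdentifier: "notes",
    attributeSet: attributes
)

CSSearchableIndex.default().indexSearchableItems([item]) { error in
    if let error {
        print("Indexing failed: \(error.localizedDescription)")
    }
}
```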
In Firebase After Hours #15, we looked into Firebase AI Logic - what it is, how you can use it, and why it is more secure than calling LLMs directly from your mobile clients. We implemented an AI-powered DnD character generator using structured generation (aka JSON mode), as well as Imagen. The engineering team also gave a sneak preview of the new macros they’re working on.
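If you want to try structured generation yourself, the sketch below shows the general shape - the schema, model name, and prompt are made up, and you should check the Firebase AI Logic documentation for the exact Swift API of the SDK version you’re using:

```swift
import FirebaseAI

// Structured generation (JSON mode): constrain the model's output to a schema.
// Schema, model name, and prompt are illustrative - verify against the docs.
let characterSchema = Schema.object(properties: [
    "name": .string(),
    "characterClass": .string(),
    "hitPoints": .integer()
])

let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
    modelName: "gemini-2.5-flash",
    generationConfig: GenerationConfig(
        responseMIMEType: "application/json",
        responseSchema: characterSchema
    )
)

let response = try await model.generateContent("Create a DnD character.")
print(response.text ?? "")
```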
Google’s Gemini 2.5 Flash Image, or Nano Banana, has been all the rage the past couple of days. If you’d like to use it in your own apps, check out this article by Patrick. It walks you through everything you need to know to use Nano Banana in your Python and TypeScript apps.
In this article, Martin Fowler shares his thoughts on the current state of LLMs and their impact on software development in general.
One thought that stood out to me was “what is the future of programming”, and whether it’s even worth entering this field as a junior. Martin admits that it’s too early to say, and that we’re still in the experimentation phase.
In a previous issue of this newsletter, I wrote about Simon Willison’s take on LLMs and GenAI, and how he thinks these tools allow him to run circles around people who aren’t using them. Martin, on the other hand, seems rather hesitant when it comes to using LLMs.
I think it’s always good to hear different perspectives when forming your own opinion. Both Martin and Simon are respected, well-seasoned experts in their craft, and I can only recommend taking both of their viewpoints into consideration.
If you’ve ever noticed that your favourite agentic IDE struggles to write Swift code, it might not be its fault!
Apple’s Developer documentation is locked behind JavaScript, turning it into a giant black box for LLMs.
sosumi.ai is a service that translates Apple Developer documentation pages into AI-friendly Markdown.
Their approach is actually pretty interesting:
Content is fetched transiently and may be cached briefly to improve performance (approximately 30 minutes). No permanent archives are maintained. All copyrights and other rights in the underlying content remain with Apple Inc. Each page links back to the original source.
If you think this means the service would be slow, you’ll be pleasantly surprised - they deliver sub-second response times.
Apple’s new Foundation Models framework provides type-safe APIs for using Apple Intelligence models in your apps.
But what if they’re not available?
In this article, Majid shows how he uses the facade pattern to encapsulate AI features in his app. This not only makes the code easier to reason about, it also gives him an easy way to handle situations in which the Foundation Models framework is not available.
In Majid’s case, he can just return the input, but that’s a pretty niche use case.
If you actually need to generate a response, you might want to call a cloud-based model instead.
If you’re curious how that works, tune in to this week’s livestream, in which I will implement a strategy for handling this.
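In the meantime, here’s a rough, protocol-based sketch of the shape this could take - the protocol and type names are made up, and Majid’s actual implementation may well look different:

```swift
import FoundationModels

// A facade for the app's AI feature: callers only see `Rewriter`.
// Names are hypothetical; the fallback simply returns the input unchanged.
protocol Rewriter {
    func rewrite(_ text: String) async throws -> String
}

struct OnDeviceRewriter: Rewriter {
    func rewrite(_ text: String) async throws -> String {
        let session = LanguageModelSession()
        let response = try await session.respond(to: "Improve this text:\n\(text)")
        return response.content
    }
}

struct PassthroughRewriter: Rewriter {
    // Foundation Models isn't available: echo the input back, as Majid does.
    func rewrite(_ text: String) async throws -> String { text }
}

func makeRewriter() -> any Rewriter {
    if case .available = SystemLanguageModel.default.availability {
        return OnDeviceRewriter()
    } else {
        return PassthroughRewriter()
    }
}
```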
MCP servers allow you to connect LLMs to all sorts of APIs, apps, and other data sources. One of the cool things about the Model Context Protocol (MCP) is that you can use your favourite language to implement MCP servers.
Here is Artem Novichkov, explaining how to build MCP servers in Swift.
If you’re curious, I implemented an MCP server for Keynote (in Swift) a while ago on my livestream. Check out the recording here.
I don’t know how useful this is, but this font surely looks fun - at least for people who remember dot matrix printers. As Andy Levy perfectly puts it - you can hear this font 😂