JetpackCompose.app's Dispatch Issue #13
💌 In today's issue, we talk about 🥶 chilly mobile hiring trends, the best Kotlin-speaking LLMs 🗣️, another project in the Google graveyard 🪦, shadcn/ui for Android ✨, and learn how to trust time ⏰
Can a mobile team make their releases boring?
Squarespace’s Unfold team did! Read what their uneventful mobile releases look like and how they came about, dig into how eventful they used to be, and hear why having uneventful releases is a superpower.
GM Friends. This is JetpackCompose.app’s Dispatch, your favorite cozy corner of the internet for all things Android Development and Jetpack Compose.
This is Issue #13, so grab your coffee (or tea, we don't judge!), settle in, and let's dive into some cool stuff happening in our world 🤖
🤔 Interesting tid-bits
Alright, let's kick things off with a few nuggets that caught my eye recently:
Which LLM Speaks Kotlin Best?
Ever find yourself arguing with an LLM about Kotlin syntax? 🤬 Well, the folks at JetBrains Research decided to put the big players to the test. They pitted models from OpenAI and Anthropic against the up-and-coming DeepSeek-R1 using benchmarks like KotlinHumanEval and a new one called Kotlin_QA (which focuses on real-world Kotlin questions, many sourced from the community Slack!). The results? Surprisingly, DeepSeek-R1 actually topped the charts on the Kotlin_QA benchmark, showing strong reasoning for open-ended questions. OpenAI's models weren't far behind, absolutely crushing the code-gen focused KotlinHumanEval with a ~92% success rate! 🤯 However, there's a catch: DeepSeek-R1 is currently much slower than the OpenAI models, making it less practical for real-time assistance. The takeaway? LLMs are getting seriously good at Kotlin, great for explanations and basic code-gen, but they still make mistakes (like miscounting 'r's in "strawberry" 🍓 - classic LLM!) and might miss imports. So, use 'em, but trust... and verify! Check out the full breakdown on the JetBrains blog.
| Model Name | Success Rate (%) |
| --- | --- |
| OpenAI o1 | 91.93 |
| DeepSeek-R1 | 88.82 |
| OpenAI o1-preview | 88.82 |
| OpenAI o3-mini | 86.96 |
| OpenAI o1-mini | 86.34 |
| Google Gemini 2.0 Flash | 83.23 |
| Anthropic Claude 3.5 Sonnet | 80.12 |
| OpenAI GPT-4o | 80.12 |
| OpenAI GPT-4o mini | 77.02 |

Top models on the KotlinHumanEval benchmark (assessment date: January 2025)
| Generators | Average assessment |
| --- | --- |
| DeepSeek-R1 | 8.79 |
| OpenAI o3-mini | 8.70 |
| OpenAI o1 | 8.62 |
| OpenAI o1-preview | 8.60 |
| OpenAI o1-mini | 8.40 |
| OpenAI GPT-4o 11.20.2024 | 8.40 |
| Anthropic Claude 3.5 Sonnet | 8.38 |
| OpenAI GPT-4o 08.06.2024 | 8.18 |
| Anthropic Claude 3.5 Haiku | 8.01 |
| Google Gemini 2.0 Flash | 7.74 |
| Google Gemini 1.5 Pro | 7.45 |
| OpenAI GPT-4o mini | 7.26 |
| Google Gemini 1.5 Flash | 6.51 |
| Google Gemini Ultra | 6.50 |
| Anthropic Claude 3 Haiku | 6.50 |

Kotlin_QA leaderboard (assessment date: January 2025)
Note: Since these assessments were done in January 2025, two really important contenders are noticeably missing: Claude 3.7 Sonnet and Google's Gemini 2.5 Pro. Both would most probably land in the top 5 if these evaluations were run again today. Just something to keep in mind in this ever-changing foundation-model landscape.
🥶 Native Mobile Hiring: A Chill in the Air?
Gergely Orosz, known for his insights on the tech job market, recently revisited a question he asked two years ago: Is native mobile development declining? His conversations with recruiters suggest a noticeable dip, particularly for early-stage, VC-funded startups in the US, who seem to be leaning heavily towards cross-platform solutions like React Native.
"Asked this question two years ago. The answer now (based on talking with recruiters) seems to be a definite 'yes, there is a decline. Early-stage startups overwhelmingly choose cross-platform mobile. Specifically, VC-funded ones in the US bias strongly to React Native.'"
— Gergely Orosz (@GergelyOrosz), 9:52 AM, Feb 17, 2025
Why? Often, it boils down to resources. Startups need to move fast and reach multiple platforms with a smaller team. It's rarely the first choice if resources were unlimited, but it's a pragmatic one. My personal take? While this trend is real, the rise of LLMs could be a counter-force. If AI significantly boosts native developer productivity (and let's be honest, it's already helping!), the resource argument for cross-platform might weaken. We might see companies finding it easier to launch native apps from day one, potentially slowing the growth of frameworks like Flutter and React Native, especially if their adoption was primarily driven by these resource gaps. It's a fascinating dynamic that we’ll have to wait and watch because there are many other variables at play and only time will tell!
👋 So long, Relay!
Remember Relay? Google's ambitious project from the Material Design team aimed at bridging the gap between Figma designs and Jetpack Compose code. It promised a smoother designer-developer handoff by translating Figma components into Compose code via an intermediate format. Well, Google recently announced they're sunsetting Relay on April 30, 2025.

👋🏻 It’s been real (tbh, not)
While it's always sad to see a tool go, especially one tackling such a thorny problem, the writing might have been on the wall. My personal experience with it leads me to believe that this outcome was partly because Relay was conceived in a pre-LLM world. Its approach, relying on a rigid intermediate format and compiler-like translation, meant your Figma structure had a huge impact on the output, often requiring specific layering that designers might not naturally use. LLMs, with their ability to understand context and generate code more dynamically, seem much better suited to this messy, real-world problem. I suspect this is a big reason why Relay, in its current form, didn't quite hit the mark. But fear not! The Figma-to-code dream isn't dead. Tools like Anima and Builder.io are already leveraging AI for this, and I'm betting we'll see even smarter, LLM-first solutions emerge. The decades-old quest (is anyone here old enough to remember Adobe Dreamweaver and Microsoft FrontPage?) for the holy grail of going from a visual tool to production-ready code continues! ✨

📣 Personal Update
If you remember, Issue #11 of this newsletter was an absolute banger - I talked to some OG Android experts about their hot-takes, predictions, wishes, and advice on how they expect the Android ecosystem to change and how one should prepare for it.
Everyone was super generous with their opinions, and it sparked a lot of follow-on conversations within the Android community. It was also the topic that we anchored on when I was invited to hang out with my good friend Kaushik Gopal to record the latest episode of the very popular Fragmented Podcast.
I’m delighted to see that it really resonated within the Android community and is now going to be the topic of the keynote for Droidcon NYC happening in June this year 🥹 It takes a lot of time to put each newsletter edition together, so things like these go a very long way in keeping me motivated.
The folks at Droidcon were kind enough to offer my readers a discount code so if you plan to attend Droidcon NYC this year (and I insist you do so that we can hang out together!), use this link to get a 25% discount for being a subscriber of the best Android newsletter in the business 😂

Started with a spicy newsletter topic. Accidentally scheduled a keynote. Classic.

😆 Dev Delight

But at least he has a sick workspace to spill his coffee on ☕️

Yeah right…….

🕵🏻‍♂️ Insider Insight
We've all been there, right? You need an accurate timestamp for something critical – logging a transaction, enforcing a limited-time offer, synchronizing game states, maybe even just recording when an offline document was edited. So you reach for `System.currentTimeMillis()` or `Instant.now()`. Simple.
Except... it's not always reliable, is it? 🙈 What if the user manually changed their device clock? Maybe they're trying to cheat your game's energy system or snag that expired coupon. Or maybe the device clock has just drifted due to temperature changes, battery level, or being in doze mode for too long. Suddenly, your "reliable" timestamp isn't so reliable anymore. This can lead to data inconsistencies, security holes, messed-up scheduling, and general chaos.
For years, the common workaround was hitting an NTP (Network Time Protocol) server. Fun fact - Google exposes a public NTP server that can be accessed at time.google.com. But doing that every time you need the current time? That's a lot of network requests, battery drain, and potential latency. There had to be a better way.
And now there is! Google recently introduced the TrustedTime API via Google Play services. This nifty API aims to give you a trustworthy timestamp, independent of the device's local clock settings.
How does it work? It cleverly leverages Google's own highly accurate time servers. The `TrustedTimeClient` periodically syncs with these servers in the background. When you ask for the time, it doesn't necessarily make a network call. Instead, it uses its last sync point and calculates the current time based on the device's internal clock (specifically, the monotonic clock, which isn't affected by wall-clock changes). Crucially, it also models the drift of the device's clock and provides an error estimate with each timestamp! 🤯 You’ve probably heard that times and timezones in programming can be very tricky, and this section of the newsletter should give you a glimpse at why that’s the case 😅 You can learn more about how to leverage the API in this launch blogpost.
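To make that mechanism concrete, here's a tiny Kotlin sketch of the "last sync point + monotonic elapsed time" idea. This is my own illustrative model, not the actual TrustedTime implementation: the class name and fields are invented, and the real client additionally tracks drift and error bounds. On Android it would also use something like `SystemClock.elapsedRealtime()` so the clock keeps counting through deep sleep, whereas this sketch uses the JVM's `System.nanoTime()`:

```kotlin
// Illustrative sketch only, NOT the real TrustedTime implementation.
// Idea: remember what a trusted server said the time was at sync, plus where
// the monotonic clock was at that same moment. Later reads never touch the
// wall clock, so a user changing the device's date/time has no effect.
class TrustedClockSketch(
    // Epoch millis reported by a trusted time server at the moment of sync.
    private val serverEpochMillisAtSync: Long,
    // Monotonic reading captured at that same moment (defaults to "now").
    private val monotonicNanosAtSync: Long = System.nanoTime()
) {
    // Trusted "now" = last synced server time + monotonic elapsed time since sync.
    fun computeCurrentTimeMillis(): Long {
        val elapsedMillis = (System.nanoTime() - monotonicNanosAtSync) / 1_000_000
        return serverEpochMillisAtSync + elapsedMillis
    }
}

fun main() {
    val clock = TrustedClockSketch(serverEpochMillisAtSync = 1_700_000_000_000)
    Thread.sleep(50)
    // Roughly 50ms should have elapsed since the "sync", regardless of any
    // wall-clock changes made in between.
    println(clock.computeCurrentTimeMillis() - 1_700_000_000_000)
}
```

The real API layers drift modeling on top of this, which is exactly why each timestamp it hands back comes with an error estimate rather than pretending to be perfect.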
Important Notes:
- Availability: Works on Android 5.0+ with Play Services. Needs `play-services-time:16.0.1+`.
- Internet Required (Initially): Needs an internet connection sometime after boot to perform its first sync. If it hasn't synced yet, the API methods will return `null`.
- Not Tamper-Proof: While it makes casual time manipulation ineffective, determined attackers with advanced techniques might still find ways. Don't treat it as a foolproof security measure on its own.

This is a fantastic addition to our toolkit, solving a long-standing annoyance for many critical app functions. Go check out the official docs and start trusting time again!

🔦 Community Spotlight
Okay friends, let's talk UI components. If you've dabbled in the web development world, particularly with React, you've likely heard of shadcn/ui. It's taken the web community by storm, but it's not a traditional component library like Material UI or Chakra UI.
What's the difference? Instead of installing a package full of pre-built components (`<Button>`, `<Card>`, etc.) that you import and use, `shadcn/ui` gives you beautifully designed, accessible components that you copy and paste directly into your project. You install a CLI tool, run a command like `npx shadcn-ui@latest add button`, and boom – the source code for the button component lands right in your codebase (e.g., `src/components/ui/button.tsx`).
Why is this cool?
Full Ownership: The code is yours. You can tweak it, restyle it, refactor it – anything you want, without fighting library opinions or waiting for updates.
No Runtime Overhead: It's not a dependency adding bloat. It's just your code.
Customizable: Built on Tailwind CSS, it's designed for easy customization.
Now, wouldn't something like that be awesome for Jetpack Compose?
Enter Lumo UI by nomanr! 🤩 I’m assuming it’s directly inspired by `shadcn/ui`, as Lumo UI brings that same philosophy to the Android world. It's a Gradle plugin that provides a CLI to generate Compose UI components right into your project.
You add the plugin, sync Gradle, and then you can use commands to add components like buttons, cards, dialogs, etc., to your codebase.
./gradlew lumo --add Button
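For reference, the "add the plugin and sync" step might look roughly like the snippet below in `build.gradle.kts`. The plugin ID and version here are placeholders of my own, so grab the real coordinates from the Lumo UI README:

```kotlin
// app/build.gradle.kts -- illustrative only. The plugin ID and version are
// placeholders; copy the actual coordinates from the Lumo UI README.
plugins {
    id("com.example.lumo") version "x.y.z" // hypothetical coordinates
}
```

After a Gradle sync, component-generation tasks like the `./gradlew lumo --add Button` command shown above become available.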
I love seeing ideas cross-pollinate between ecosystems, and this `shadcn/ui`-style approach feels like a natural fit for Compose's flexibility. Definitely one to watch (and maybe contribute to!).

🦄 How you can help
👉🏻 If you enjoyed this newsletter, I'd greatly appreciate it if you could share it with your peers and friends. Your support helps me reach more Android developers and grow our community. Writing this newsletter takes time and effort, and it truly makes my day when I see it shared within your network. Consider tweeting about it or sharing it in your team's Slack workspace. Thank you!
👉🏻 If you find this newsletter or any of my other free websites or open source contributions useful, consider supporting my work 🙏🏻
👉🏻 Get my Maker OS Notion Template that I use to run my life on Notion. If you are into productivity, you will appreciate it! Here’s a quick preview of what it looks like—
👂 Let me hear it!
What’d you think of this email? Tap your choice below and lemme hear it 👇
🚀🚀🚀🚀🚀 AMAZING-LOVED IT!
🚀🚀🚀 PRETTY GOOD!
On that note, here’s hoping that your bugs are minor and your compilations are error-free,
—
