I recently attended droidcon in New York, and it was great to convene with hundreds of Android developers. I wanted to share my takeaways and reflections. The talks haven’t been uploaded yet, but here’s what the conference schedule looked like.
Non-AI reflections — Android as a mature ecosystem
2025 felt like the first droidcon in years without groundbreaking changes in Android libraries or tooling. (Previous years were all about KMP, Compose, etc.) While I don’t expect this stability to last long, it was nice to watch deeper dives into the inner workings of increasingly mature frameworks like Compose and Coroutines.
Since the conference happened soon after Google I/O and KotlinConf, several talks echoed release announcements from both events. For example, “Mastering Text Input in Compose” was nearly identical to this Google I/O talk, demoing InputTransformation, SecureTextField, and rich content copy-and-paste.
“Playing with Experimental Kotlin Features” was a fun one that explored preview features in Kotlin 2.2, including context parameters and nested typealiases. My main learnings were:
- The Kotlin website has a Language features and proposals page that documents KEEPs (Kotlin Evolution and Enhancement Process proposals, i.e. proposed new language features)
- The website page isn’t comprehensive because some features bypass the KEEP process. Another way to see experimental features is searching Kotlin’s compiler flags directly, by running:
kotlinc -X | grep -i "experimental"

Snippet from kotlinc -X | grep -i "experimental" output
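The context parameters preview is easiest to appreciate next to today’s explicit-parameter style. Here’s a minimal, hand-written sketch (my own example, not from the talk, using only standard Kotlin); the commented-out form shows the experimental syntax, which requires the -Xcontext-parameters flag in Kotlin 2.2 and may still change:

```kotlin
// A cross-cutting dependency like a logger must currently be passed explicitly.
interface Logger {
    fun log(message: String)
}

class ConsoleLogger : Logger {
    override fun log(message: String) = println("[log] $message")
}

fun greet(name: String, logger: Logger): String {
    logger.log("greeting $name")
    return "Hello, $name!"
}

// With the experimental -Xcontext-parameters compiler flag enabled, the same
// function could instead be declared as (preview syntax, subject to change):
//
//   context(logger: Logger)
//   fun greet(name: String): String { ... }
//
// and callers that already have a Logger in context scope would omit the argument.

fun main() {
    println(greet("droidcon NYC", ConsoleLogger()))
}
```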
The “Future of Dependency Injection” panel focused heavily on Metro, a new KMP compile-time dependency injection framework that draws inspiration from Dagger, Anvil, and Kotlin-Inject. It’s currently only on version 0.4.0, but is getting a lot of support within the Android community, so I expect it’ll be the topic of many future talks.
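For readers unfamiliar with compile-time dependency injection, the core idea behind frameworks like Dagger (and, as I understand it, Metro) is that constructor wiring is generated at build time rather than resolved via reflection at runtime. Below is a hedged, framework-free sketch of roughly what that generated wiring boils down to; all class names are made up for illustration:

```kotlin
// Hypothetical app types with a simple dependency chain.
class HttpClient(val baseUrl: String)
class UserRepository(val client: HttpClient)
class UserViewModel(val repository: UserRepository)

// A compile-time DI framework's generated "graph"/"component" is
// conceptually just hand-ordered constructor calls like these.
class AppGraph(baseUrl: String) {
    private val client = HttpClient(baseUrl)          // singleton within the graph
    private val repository = UserRepository(client)   // singleton within the graph
    val userViewModel: UserViewModel
        get() = UserViewModel(repository)             // fresh instance per request
}

fun main() {
    val graph = AppGraph("https://example.com")
    println(graph.userViewModel.repository.client.baseUrl)
}
```

Because the wiring is ordinary code, mistakes like missing bindings surface as compile errors instead of runtime crashes, which is a big part of the appeal.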
It’s nice to have a break from Android-specific groundbreaking changes, and…turn to AI instead?
AI reflections — hype vs. reality
I’m both excited by the proliferation of AI coding tools and daunted by how much it is changing software engineering and the world. It was refreshing to discuss AI at a developer-driven conference and learn that many Android engineers felt similarly. At many companies, leadership is buzzing with AI hype, making it challenging to cut through the noise and understand what AI can actually do. Staff+ engineers with comparable years of industry experience offered more nuanced takes.
In the “Future of Android” panel keynote, a panelist pointed out that much of the hype is based on vanity metrics. For example, “70% of code at X is generated by AI” isn’t meaningful because even before AI, we generated a large percentage of code with IDEs’ autocomplete features. That said, AI still offers genuine productivity advantages.
Some ideas I plan to incorporate into my own workflow:
- Tackle the backlog: Many engineers have a backlog of non-urgent tasks/bugs we’ll realistically never get to. We could spend a few minutes prompting an agent to do them. If it succeeds, great! If it doesn’t, the task stays in the backlog.
- Meeting prep assistance: Ask AI to create meeting prep docs, using past notes and relevant documentation.
- Stay current with models: Make sure you’re using the latest and greatest. Claude-4-sonnet and gpt-4.1 seem to be the best options for most Android coding tasks currently (Claude-4-opus and o3 are more powerful, but slower and more expensive), but new models are constantly being released. Cursor has a helpful decision tree visualization for choosing models.
The tools that came up often were Cursor (AI-powered IDE), Goose (open source AI agent), and GitHub Copilot. I also think it’s worth getting familiar with non-Android Studio IDEs, because most AI-powered IDEs are VS Code forks, and KMP plus the industry-wide pressure to do more with less might mean using Xcode. On the flip side, I learned that Firebender — a new IDE marketed as “Cursor for Android Studio” — exists.
Many attendees spoke of the ambiguity of learning AI. New software frameworks usually have a visible path to mastery: learn basic APIs, learn more complicated usages, read (or even contribute to) source code. While there will always be quirks and bugs even the maintainers aren’t aware of, the codebase is fundamentally finite.
The AI landscape is vast and constantly shifting. A few resources I’ve found helpful for feeling more grounded:
- The Engineering Enablement podcast’s “Obstacles preventing GenAI adoption” episode (if you’re not a podcast person, the website’s notes are excellent too!), which covers non-vanity AI adoption stats and concrete tips for lowering obstacles for engineers
- OpenAI’s prompting guide
Spicier AI reflections
Another concern that surfaced a few times at droidcon: will AI mark the end of craftsmanship in software engineering? I have no answer to this 🤷, but I’m tangentially involved in writing and visual arts communities, and I’d love to see more dialogue between creators in different fields.
Engineers are generally aware that a lot of AI-generated code is slop, but recognizing slop comes with experience. I can often tell if AI-generated Kotlin code is smelly, but I’m more nose-blind around Python and JavaScript. And professional artist friends are much quicker to notice when digital art is AI-generated than I am.
If we want to use AI to generate other creative work — writing, art, film — it’s on us to speak to creators to better understand other fields, consume more work created by humans, refine our tastes, learn about AI concerns in their fields, and share our own perspectives. My personal philosophy around copying specific styles depends on whether the creator has given permission. For example, copying Lewis Carroll feels fine, because his works are public domain. Copying Scarlett Johansson’s voice from Her feels more questionable, because she didn’t agree to it, plus it’s kinda weird!
Final thoughts
Shoutout to the organizers and speakers (including many of my coworkers 🙂) at Droidcon NYC for the insightful talks and community-building. Looking forward to 2026!
And thanks to Russell for his valuable editing and feedback.
This article was previously published on proandroiddev.com.


