On Android Stylus APIs
Why some drawing apps feel like paper and others feel like mud
Dear u/CopperCoinStudio,
Love that you want to port your mindfulness drawing game to the Daylight tablet! Making stylus behavior feel really good is genuinely hard, or at least that's what I've heard from the Daylight software team. What follows is not what they said, just the little I picked up, mixed with a whole bunch of AI-assisted research.
Take it as a first draft of a map made by an amateur, not a territory. If something's wrong, tell me. I'd rather be corrected than leave bad directions up.
The Good News First
You're asking at the right moment. The Jetpack Ink API went stable two weeks ago—1.0.0 on December 17, 2025. What was alpha territory is now production-ready.
And it's not experimental technology. The google/ink repository—the C++ core that powers the Jetpack Ink API—explicitly describes itself as "a rewrite of the stroke generation portion of the Sketchology code," Google's internal inking codebase. According to various sources, this is the same infrastructure behind Chrome Canvas (launched 2018), Jamboard, Chrome OS PDF annotation, and more recently Circle to Search. Years of internal use before it became a public API.
The mud feeling? It's solvable. The platform finally has the tools.
The Three Layers
Android's stylus system has three components that work together:
MotionEvent is the raw input layer. Everything the hardware reports: pressure, tilt, orientation, hover. This is where your data comes from.
The Ink API handles stroke rendering with low-latency graphics built in. Front-buffered rendering, motion prediction, stroke geometry—all packaged together.
MotionPredictor is an algorithm that guesses where the stylus is going, letting you draw ahead of reality. The stroke appears to flow from the tip rather than trailing behind it.
If you're starting fresh, use the Ink API. It handles the hard parts. If you're integrating into existing architecture, understand all three and apply them where they fit.
Why Latency Creates Mud
You're fighting against accumulated delays:
Touch sampling (how often the hardware checks for input), processing time (your code turning input into geometry), the rendering pipeline (getting pixels to screen), and display refresh (the screen actually updating).
Add these up naively and you can easily hit 30ms or more of delay. That's perceptible. That's the mud.
The commonly cited target is under 20ms perceived latency. Below that, most people experience the stroke as immediate.
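To make the arithmetic concrete, here's a back-of-the-envelope budget. The per-stage numbers are my own illustrative guesses (120Hz touch sampling, a 60Hz render pipeline), not measurements from any device:

```kotlin
import kotlin.math.roundToInt

// Hypothetical stage estimates in milliseconds; real values vary by device.
fun naiveLatencyBudgetMs(): Double {
    val touchSampling = 8.3   // one 120 Hz sample period, worst case
    val processing = 2.0      // app-side work turning input into geometry
    val renderPipeline = 16.7 // one extra buffered frame at 60 Hz
    val displayRefresh = 8.3  // average wait for the next scanout
    return touchSampling + processing + renderPipeline + displayRefresh
}

fun main() {
    // Well above the 20 ms target: this is the mud.
    println("naive total: ${naiveLatencyBudgetMs().roundToInt()} ms")
}
```

Front-buffered rendering and prediction attack the two biggest line items in that budget.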
The Critical Detail Everyone Misses
Android batches input events for efficiency. A single MotionEvent may contain multiple samples—historical data since the last event. If you only process the current point, you're dropping samples. Your strokes will look jaggy, especially during fast movements.
Always process the history:
```kotlin
fun onTouchEvent(event: MotionEvent): Boolean {
    // Batched samples first, oldest to newest...
    for (i in 0 until event.historySize) {
        val x = event.getHistoricalX(i)
        val y = event.getHistoricalY(i)
        val pressure = event.getHistoricalPressure(i)
        processPoint(x, y, pressure)
    }
    // ...then the current (most recent) sample.
    processPoint(event.x, event.y, event.pressure)
    return true
}
```
The documentation and tutorials I found all emphasize this point. Worth checking first.
Pressure: Where Drawing Becomes Expressive
The hardware reports pressure as a float from 0.0 to 1.0. How you map this to stroke width matters.
A linear mapping—more pressure equals proportionally wider stroke—is the obvious approach. But the guidance I found suggests artists often prefer a curve: light pressure producing subtle changes, heavy pressure producing dramatic thickness. The response should feel musical.
```kotlin
// pressure is the 0.0..1.0 float from MotionEvent
val baseWidth = 4f
val maxWidth = 20f
val strokeWidth = baseWidth + (pressure * (maxWidth - baseWidth))
```
That's the linear version. Experiment with exponential or sigmoid curves. Your users' hands will tell you when it's right.
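As one concrete nonlinear option, here's a gamma-style curve. The function name and the gamma value of 2.2 are my own arbitrary starting points, not a recommendation:

```kotlin
import kotlin.math.pow

// Sketch of a nonlinear pressure -> width mapping. Raising pressure to a
// power > 1 keeps light touches subtle and lets heavy presses bloom.
fun strokeWidthFor(
    pressure: Float,
    baseWidth: Float = 4f,
    maxWidth: Float = 20f,
    gamma: Float = 2.2f
): Float {
    val curved = pressure.coerceIn(0f, 1f).pow(gamma)
    return baseWidth + curved * (maxWidth - baseWidth)
}

fun main() {
    // At half pressure the curve stays noticeably thinner than linear:
    println(strokeWidthFor(0.5f))   // gamma curve at 0.5 pressure
    println(4f + 0.5f * (20f - 4f)) // linear mapping at 0.5 pressure
}
```

Swap the `pow` call for a sigmoid if you want a soft ramp at both ends; either way, tune by drawing, not by math.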
Front-Buffered Rendering: Trading Safety for Speed
Standard Android rendering is double-buffered. You draw to a back buffer while the front buffer displays. When you're done, they swap. This prevents tearing but adds a frame of latency.
For stylus input, Google introduced front-buffered rendering through the androidx.graphics library. For small, localized updates—like a stroke segment—you skip the double-buffering and draw directly to what's on screen.
The pattern: render current stroke segments to the front buffer for immediate feedback. When the stroke completes (stylus lifts), commit to the double-buffered layer for persistence.
```kotlin
val renderer = GLFrontBufferedRenderer(surfaceView, callbacks)

// While drawing: immediate feedback
renderer.renderFrontBufferedLayer(strokePoint)

// On stylus up: persist to the double-buffered canvas
renderer.commit()
```
Motion Prediction: Drawing the Future
Even with front-buffered rendering, you're reacting to where the stylus was. MotionPredictor estimates where it's going using a Kalman filter—velocity, acceleration, pressure change, trajectory.
You render predicted points as if they were real, then correct when actual data arrives. The predictions will be wrong. Your rendering must replace predicted segments gracefully, without visual discontinuity.
```kotlin
val predictor = MotionEventPredictor.newInstance(view)

fun onTouchEvent(event: MotionEvent) {
    predictor.record(event)
    val predicted = predictor.predict()
    renderStroke(event)                  // what actually happened
    if (predicted != null) {
        renderPredictedStroke(predicted) // the educated guess, redrawn next frame
    }
}
```
This is part of what makes apps feel responsive—the stroke flowing from the tip rather than trailing behind.
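On the "replace predicted segments gracefully" point, one simple structure (the class and names here are my invention, not part of any androidx API) is to keep guesses in an ephemeral list that each batch of real input wipes:

```kotlin
data class Point(val x: Float, val y: Float)

// Predicted points live separately from committed ones, so stale guesses
// can be discarded wholesale instead of being surgically removed.
class PredictiveStroke {
    private val committed = mutableListOf<Point>()
    private var predicted: List<Point> = emptyList()

    fun onRealInput(points: List<Point>) {
        committed += points
        predicted = emptyList() // real data replaces last frame's guesses
    }

    fun onPrediction(points: List<Point>) {
        predicted = points // overwrite, never append: predictions are disposable
    }

    // What the renderer draws this frame: truth first, guesses on top.
    fun renderList(): List<Point> = committed + predicted
}

fun main() {
    val stroke = PredictiveStroke()
    stroke.onRealInput(listOf(Point(0f, 0f), Point(1f, 1f)))
    stroke.onPrediction(listOf(Point(2f, 2f)))
    println(stroke.renderList().size) // 3: two committed + one predicted
    stroke.onRealInput(listOf(Point(1.9f, 2.1f))) // real data arrives, guess discarded
    println(stroke.renderList().size) // 3: three committed, zero predicted
}
```

Because the predicted tail is redrawn from scratch every frame, a wrong guess never lingers longer than one frame.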
The Ink API: Why You Should Probably Just Use This
The Jetpack Ink API bundles all three layers into one coherent package:
- ink-authoring — real-time stroke input handling
- ink-brush — customizable brush definitions
- ink-geometry — stroke representation and manipulation
- ink-rendering — optimized rendering with front-buffering
- ink-strokes — stroke storage and serialization
```kotlin
val brush = Brush.createWithColorIntArgb(
    family = StockBrushes.markerLatest,
    colorIntArgb = Color.BLACK,
    size = 10f,
    epsilon = 0.1f
)

InkCanvas(
    modifier = Modifier.fillMaxSize(),
    brush = brush,
    onStrokeCompleted = { stroke ->
        strokes.add(stroke)
    }
)
```
You define brushes. You respond to completed strokes. The API handles the frame-by-frame complexity. For a mindfulness drawing app—where the feel of the stroke matters more than baroque feature sets—this is probably where to focus your energy.
What Concepts Did (And Why You Don't Have To)
You asked about Concepts specifically. They achieved paper-like latency years before these APIs existed. Their public documentation describes the approach: a "vector-based" engine where "every stroke is an editable vector," combined with what they call "vector-hybrid brushes." This lets them:
- Render strokes as vectors (resolution-independent, infinitely zoomable)
- Maintain an infinite canvas with level-of-detail rendering
- Build custom prediction algorithms (before MotionPredictor existed)
- Optimize explicitly for high-refresh-rate displays
They've been building this since 2012 on iOS, 2018 on Android—years of focused engineering. The Ink API now provides comparable low-latency rendering infrastructure out of the box. You don't need to build what they built to achieve similar responsiveness.
Hardware Reality
You own the hardware you're building for—that's an advantage. Test on it early and often. The emulator lies about touch input, and stylus feel is one of those things you can only judge with the actual device in hand.
A few things the APIs can't abstract:
- Touch sample rate varies by device. Design for at least 120Hz.
- Daylight uses Wacom EMR—battery-less pens powered inductively by the tablet, reporting full pressure (4096 levels), tilt, and hover. This is professional-grade input. Cheap capacitive styluses, by contrast, just mimic finger touches with no real pressure data.
- Palm rejection is usually handled by the system. Trust the reported tool type (MotionEvent.getToolType) to tell stylus from finger.
The Mistakes That Create Mud
Based on the documentation and developer discussions I've read:
- Ignoring historical events. Always iterate through getHistoricalX/Y/Pressure. Frequently cited as a common source of jaggy strokes.
- Linear pressure mapping. Consider nonlinear response curves.
- Blocking the UI thread. Keep stroke processing fast.
- Full-canvas invalidation. Only redraw what changed.
- Ignoring ACTION_CANCEL. The system sends this when a dialog appears mid-stroke.
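On the full-canvas invalidation point, the usual fix is a dirty rectangle: track the bounding box of just the points added this frame, padded by the stroke radius, and repaint only that region. `Rect` and `dirtyRect` below are my own helpers, not an Android API:

```kotlin
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float)

// Bounding box of this frame's segment, padded so wide strokes aren't clipped.
fun dirtyRect(segment: List<Pair<Float, Float>>, pad: Float): Rect {
    val xs = segment.map { it.first }
    val ys = segment.map { it.second }
    return Rect(
        xs.minOrNull()!! - pad,
        ys.minOrNull()!! - pad,
        xs.maxOrNull()!! + pad,
        ys.maxOrNull()!! + pad
    )
}

fun main() {
    val segment = listOf(0f to 0f, 10f to 5f)
    println(dirtyRect(segment, pad = 2f)) // a small region instead of the full canvas
}
```

On a custom View you would hand this region to an invalidate call rather than redrawing everything; the Ink API does the equivalent bookkeeping for you.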
Where to Start
Porting an existing app:
- Audit your MotionEvent handling. Are you processing history?
- Add MotionPredictor. Highest impact, lowest effort.
- Consider GLFrontBufferedRenderer if you're already using OpenGL.
Starting fresh:
- Use the Ink API. It's stable now.
- Define brush behaviors through the Brush API.
- Focus on what makes your app unique.
A mindfulness drawing game is a great fit for the Daylight. I'd love to try it when you have an alpha version, and I hope this map helps you get it feeling right enough to have the community try it out and give you feedback.
If you find errors in what I wrote here, please do write back. I'd rather fix them.
If you want to go deeper on Daylight-specific development, I'm happy to help connect you with the team. They're accessible and genuinely interested in developers building for the platform. A couple of other developers worth learning from: u/mattdevlog (Matt Thompson, creator of paravel.ai) has been building specifically for the Daylight and is active in both Daylight subreddits. And the tldraw team has their collaborative whiteboard working nicely on the device, so it's worth looking at how they've approached stylus input.
Moritz
Questions? Write back.