Posted over 6 years ago
After quite a bit of time away from the blog, I am back.
The biggest reason for this silence was that writing these posts was taking a lot of my time, while I got almost no feedback on them.
Let's see if we can do better this time.
Here is a small cone, to make you happier:
|
Posted over 6 years ago
The current Java/Android concurrency frameworks lead to callback hell and blocking states, because we have no other simple way to guarantee thread safety.
With coroutines, Kotlin brings a very efficient and complete framework to manage concurrency in a simpler and more performant way.
Coroutines way
Suspending vs blocking
Basic usage
Dispatch
Coroutine context
Scope
Notes
Callbacks and locks elimination with channels
Actors
Android lifecycle + Coroutines
Callbacks mitigation (Part 1)
Callbacks mitigation (Part 2): Retrofit
To be continued
Suspending vs blocking
Coroutines do not replace threads; they are more like a framework to manage them.
Their philosophy is to define an execution context which lets you wait for background operations to complete, without blocking the original thread.
The goal here is to avoid callbacks and make concurrency easier.
Basic usage
A very simple first example: we launch a coroutine in the Main context (main thread). In it, we retrieve an image from the IO context, then process it back in Main.
launch(Dispatchers.Main) {
val image = withContext(Dispatchers.IO) { getImage() } // Get from IO context
imageView.setImageBitmap(image) // Back on main thread
}
Straightforward code, like a single-threaded function. And while getImage runs in the dedicated IO thread pool, the main thread is free for any other job!
The withContext function suspends the current coroutine while its action (getImage()) is running. As soon as getImage() returns and the main looper is available, the coroutine resumes on the main thread, and imageView.setImageBitmap(image) is called.
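Here, getImage() is assumed to be a plain blocking function, for instance a synchronous bitmap decode, which is exactly why it must run on the IO dispatcher rather than on the main thread:
fun getImage(): Bitmap = BitmapFactory.decodeFile("/sdcard/header.png") // hypothetical path; blocking disk read and decode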
Second example: we now need two background operations done before we can use their results. We will use the async/await duo to make them run in parallel and use their results on the main thread as soon as both are ready:
val job = launch(Dispatchers.Main) {
val deferred1 = async(Dispatchers.Default) { getFirstValue() }
val deferred2 = async(Dispatchers.IO) { getSecondValue() }
useValues(deferred1.await(), deferred2.await())
}
job.join() // suspends current coroutine until job is done
async is similar to launch but returns a Deferred (the Kotlin equivalent of a Future), so we can get its result with await(). Called with no parameter, it runs in the current scope's default context.
And once again, the main thread is free while we are waiting for our 2 values.
As you can see, the launch function returns a Job that can be used to wait for the operation to be over, with the join() function. It works like in any other language, except that it suspends the coroutine instead of blocking the thread.
Dispatch
Dispatching is a key notion with coroutines: it is the action of 'jumping' from one thread to another.
Let's look at the current Java equivalent of Main dispatching, which is runOnUiThread:
public final void runOnUiThread(Runnable action) {
if (Thread.currentThread() != mUiThread) {
mHandler.post(action); // Dispatch
} else {
action.run(); // Immediate execution
}
}
The Android implementation of the Main context is a dispatcher based on a Handler. So this really is the matching implementation:
launch(Dispatchers.Main) { ... }
vs
launch(Dispatchers.Main, CoroutineStart.UNDISPATCHED) { ... }
// Since kotlinx 0.26:
launch(Dispatchers.Main.immediate) { ... }
launch(Dispatchers.Main) posts a Runnable in a Handler, so its code execution is not immediate.
launch(Dispatchers.Main, CoroutineStart.UNDISPATCHED) will immediately execute its lambda expression in the current thread.
Dispatchers.Main guarantees that the coroutine is dispatched on the main thread when it resumes, and it uses a Handler, the native Android mechanism, to post into the application event loop.
Its actual implementation looks like:
val Main: HandlerDispatcher = HandlerContext(mainHandler, "Main")
To get a better understanding of Android dispatching, you can read this blog post on Understanding Android Core: Looper, Handler, and HandlerThread.
Coroutine context
A coroutine context (which includes the coroutine dispatcher) defines on which thread the code will execute, what to do in case of a thrown exception, and refers to a parent context in order to propagate cancellation.
val job = Job()
val exceptionHandler = CoroutineExceptionHandler {
coroutineContext, throwable -> whatever(throwable)
}
launch(Dispatchers.Default + exceptionHandler + job) { ... }
job.cancel() will cancel all coroutines that have job as a parent. And exceptionHandler will receive all thrown exceptions in these coroutines.
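A minimal sketch of this behavior, with longTask() and whatever() as placeholders not present in the original code:
val parentJob = Job()
val handler = CoroutineExceptionHandler { _, throwable -> whatever(throwable) }
val scope = CoroutineScope(Dispatchers.Default + parentJob + handler)
scope.launch { longTask() } // child 1, attached to parentJob
scope.launch { longTask() } // child 2, attached to parentJob
parentJob.cancel() // cancels both children; any uncaught exception reaches handler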
Scope
A coroutineScope block makes error handling easier:
if any child coroutine fails, the entire scope fails and all of its child coroutines are cancelled.
In the async example above, if retrieving one value failed, the other one would keep running and we would have a broken state to manage.
With a coroutineScope, useValues will be called only if both retrievals succeed. Also, if deferred2 fails, deferred1 is cancelled.
coroutineScope {
val deferred1 = async(Dispatchers.Default) { getFirstValue() }
val deferred2 = async(Dispatchers.IO) { getSecondValue() }
useValues(deferred1.await(), deferred2.await())
}
We can also “scope” an entire class to define its default CoroutineContext and leverage it.
Example of a class implementing CoroutineScope:
open class ScopedViewModel : ViewModel(), CoroutineScope {
protected val job = Job()
override val coroutineContext = Dispatchers.Main+job
override fun onCleared() {
super.onCleared()
job.cancel()
}
}
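A quick usage sketch (UserRepo, User and fetchUser() are hypothetical): a subclass can then launch coroutines directly, and they are all cancelled in onCleared().
class UserViewModel(private val repo: UserRepo) : ScopedViewModel() {
    val user = MutableLiveData<User>()
    fun loadUser(id: String) = launch { // runs on Dispatchers.Main, the scope's dispatcher
        user.value = withContext(Dispatchers.IO) { repo.fetchUser(id) }
    }
}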
Launching coroutines in a CoroutineScope:
The default dispatcher of launch or async is now the dispatcher of the current scope. And we can still choose a different one, the same way we did before.
launch {
val foo = withContext(Dispatchers.IO) { … }
// lambda runs within scope's CoroutineContext
…
}
launch(Dispatchers.Default) {
// lambda runs in default threadpool.
…
}
Standalone coroutine launching (outside of any CoroutineScope):
GlobalScope.launch(Dispatchers.Main) {
// lambda runs in main thread.
…
}
We can even define an application-wide scope with Dispatchers.Main as the default dispatcher:
object AppScope : CoroutineScope by GlobalScope {
override val coroutineContext = Dispatchers.Main.immediate
}
Notes
Coroutines limit Java interoperability
Confine mutability to avoid locks
Coroutines are about waiting
Avoid I/O in Dispatchers.Default (and Main…)
Dispatchers.IO is designed for this
Threads are expensive, so are single-thread contexts
Dispatchers.Default is based on a ForkJoinPool on Android 5+
Coroutines can be used via Channels
Callbacks and locks elimination with channels
Channel definition from the JetBrains documentation:
A Channel is conceptually very similar to BlockingQueue. One key difference is that instead of a blocking put operation it has a suspending send (or a non-blocking offer), and instead of a blocking take operation it has a suspending receive.
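A minimal illustration of that difference, to be run inside any CoroutineScope (the values are arbitrary):
val channel = Channel<Int>() // rendezvous channel: capacity 0
launch {
    for (i in 1..3) channel.send(i) // suspends until a receiver is ready
    channel.close()
}
launch {
    for (value in channel) println(value) // suspends while waiting, ends when the channel is closed
}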
Actors
Let’s start with a simple tool to use Channels, the Actor.
We already saw it in this blog with the DiffUtil Kotlin implementation.
An actor is, yet again, very similar to a Handler: we define a coroutine context (so, the thread where actions will execute), and it will execute them in sequential order.
The difference is that it uses coroutines, of course :). We can specify a capacity, and the executed code can suspend.
An actor basically forwards any order to a coroutine Channel. It guarantees execution order and confines operations to its context. It greatly helps to remove synchronized blocks and keep all threads free!
protected val updateActor by lazy {
actor<Update>(capacity = Channel.UNLIMITED) {
for (update in channel) when (update) {
Refresh -> updateList()
is Filter -> filter.filter(update.query)
is MediaUpdate -> updateItems(update.mediaList as List<T>)
is MediaAddition -> addMedia(update.media as T)
is MediaListAddition -> addMedia(update.mediaList as List<T>)
is MediaRemoval -> removeMedia(update.media as T)
}
}
}
// usage
fun filter(query: String?) = updateActor.offer(Filter(query))
//or
suspend fun filter(query: String?) = updateActor.send(Filter(query))
In this example, we take advantage of the Kotlin sealed classes feature to select which action to execute.
sealed class Update
object Refresh : Update()
class Filter(val query: String?) : Update()
class MediaAddition(val media: Media) : Update()
And all these actions will be queued; they will never run in parallel. That's a good way to achieve mutability confinement.
Android lifecycle + Coroutines
Actors can be useful for Android UI management too: they ease task cancellation and prevent overloading of the main thread.
Let's make our Activity implement CoroutineScope, and call job.cancel() when it is destroyed.
class MyActivity : AppCompatActivity(), CoroutineScope {
protected val job = SupervisorJob() // the instance of a Job for this activity
override val coroutineContext = Dispatchers.Main.immediate+job
override fun onDestroy() {
super.onDestroy()
job.cancel() // cancel the job when activity is destroyed
}
}
A SupervisorJob is similar to a regular Job, with the only exception that cancellation is propagated only downwards.
So when one coroutine fails, we do not cancel all the other coroutines in the Activity.
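A quick sketch of the difference, with riskyCall() and steadyCall() as placeholders:
val scope = CoroutineScope(Dispatchers.Main.immediate + SupervisorJob())
scope.launch { riskyCall() } // if this one throws, only this child fails
scope.launch { steadyCall() } // this sibling keeps running
// With a plain Job() instead, the first failure would cancel the whole scope.
// Note: an uncaught exception still reaches the default handler unless a CoroutineExceptionHandler is installed.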
A bit better: with an extension property, we can make this CoroutineContext accessible from any View attached to a CoroutineScope.
val View.coroutineContext: CoroutineContext?
get() = (context as? CoroutineScope)?.coroutineContext
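For instance, a view can then piggyback on its host's context to load something; a sketch, where loadThumbnail() is a made-up suspend function:
fun ImageView.showThumbnail(url: String) {
    val ctx = coroutineContext ?: return // no surrounding CoroutineScope: do nothing
    CoroutineScope(ctx).launch {
        val bitmap = withContext(Dispatchers.IO) { loadThumbnail(url) }
        setImageBitmap(bitmap)
    }
}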
We can now combine all of this: the setOnClick function creates a conflated actor to manage its onClick actions. In case of multiple clicks, intermediate actions will be ignored, preventing any ANR, and these actions will be executed in the Activity's scope. So they will be cancelled when the Activity is destroyed 😎
fun View.setOnClick(action: suspend () -> Unit) {
// create the actor in the host scope (AppScope as a fallback)
val scope = (context as? CoroutineScope)?: AppScope
val eventActor = scope.actor<Unit>(capacity = Channel.CONFLATED) {
for (event in channel) action()
}
// install a listener to activate this actor
setOnClickListener { eventActor.offer(Unit) }
}
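Usage then looks like a regular click listener, except the lambda can suspend (button, refreshData() and adapter are placeholders):
button.setOnClick {
    val items = withContext(Dispatchers.IO) { refreshData() } // heavy work off the main thread
    adapter.submitList(items) // back on the main thread
}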
In this example, we set the Channel to CONFLATED in order to ignore events when we have too many of them. You can change it to Channel.UNLIMITED if you prefer to queue events without missing any of them, while still protecting your app from ANR.
We can also combine coroutines and the Lifecycle framework to automate UI task cancellation:
val LifecycleOwner.untilDestroy: Job get() {
val job = Job()
lifecycle.addObserver(object: LifecycleObserver {
@OnLifecycleEvent(Lifecycle.Event.ON_DESTROY)
fun onDestroy() { job.cancel() }
})
return job
}
//usage
GlobalScope.launch(Dispatchers.Main + untilDestroy) {
/* amazing things happen here! */
}
Callbacks mitigation (Part 1)
Here is an example of a callback-based API usage, transformed thanks to a Channel.
The API works like this:
requestBrowsing(url, listener) triggers the parsing of the folder at the url address.
The listener receives onMediaAdded(media: Media) for each discovered media in this folder.
listener.onBrowseEnd() is called once folder parsing is done.
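For reference, the callback interface being replaced looks roughly like this (a sketch with names inferred from the calls above, not the exact VLC API):
interface EventListener {
    fun onMediaAdded(media: Media) // called for each media discovered in the folder
    fun onBrowseEnd() // called once folder parsing is done
}
fun requestBrowsing(url: String, listener: EventListener) { /* starts the asynchronous folder parsing */ }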
Here is the old refresh function in the VLC browser provider:
private val refreshList = mutableListOf<Media>()
fun refresh() = requestBrowsing(url, refreshListener)
private val refreshListener = object : EventListener{
override fun onMediaAdded(media: Media) {
refreshList.add(media)
}
override fun onBrowseEnd() {
val list = refreshList.toMutableList()
refreshList.clear()
launch {
dataset.value = list
parseSubDirectories()
}
}
}
How can we improve this?
We create a channel, which will be initialized in refresh. The browser callbacks will now only forward media to this channel, then close it.
The refresh function is now easier to understand: it sets up the channel, calls the VLC browser, then fills a list with the received media and processes it.
Instead of the select or consumeEach functions, we can use a for loop to wait for media; it will break once browserChannel is closed.
private lateinit var browserChannel : Channel<Media>
override fun onMediaAdded(media: Media) {
browserChannel.offer(media)
}
override fun onBrowseEnd() {
browserChannel.close()
}
suspend fun refresh() {
browserChannel = Channel(Channel.UNLIMITED)
val refreshList = mutableListOf<Media>()
requestBrowsing(url)
//Suspends at every iteration to wait for media
for (media in browserChannel) refreshList.add(media)
//Channel has been closed
dataset.value = refreshList
parseSubDirectories()
}
Callbacks mitigation (Part 2): Retrofit
Second approach: we don't use kotlinx-coroutines at all, only the coroutine core framework.
Let’s see how coroutines really work!
The retrofitSuspendCall function wraps a Retrofit Call request to make it a suspend function.
With suspendCoroutine, we call the Call.enqueue method and suspend the coroutine. The provided callback will call continuation.resume(response) to resume the coroutine with the server response once it is received.
Then, we just have to wrap our Retrofit functions in retrofitSuspendCall to get suspending functions returning the request's result.
suspend inline fun <reified T> retrofitSuspendCall(request: () -> Call<T>
) : Response<T> = suspendCoroutine { continuation ->
request.invoke().enqueue(object : Callback<T> {
override fun onResponse(call: Call<T>, response: Response<T>) {
continuation.resume(response)
}
override fun onFailure(call: Call<T>, t: Throwable) {
continuation.resumeWithException(t)
}
})
}
suspend fun browse(path: String?) = retrofitSuspendCall {
ApiClient.browse(path)
}
// usage (within Main coroutine context)
livedata.value = Repo.browse(path)
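Since a network failure resumes the coroutine with an exception (through resumeWithException), the caller will typically guard the call; a minimal sketch, where handleError() is a placeholder:
launch {
    try {
        livedata.value = Repo.browse(path)
    } catch (e: Exception) { // surfaced by continuation.resumeWithException(t)
        handleError(e)
    }
}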
This way, the blocking network call is done on Retrofit's dedicated thread, the coroutine is only here to wait for the response, and in-app usage couldn't be simpler!
This implementation is inspired by the gildor/kotlin-coroutines-retrofit library, which makes it ready to use.
JakeWharton/retrofit2-kotlin-coroutines-adapter is also available with another implementation, for the same result.
To be continued
The Channel framework can be used in many other ways; you can look at BroadcastChannel for more powerful implementations, according to your needs.
We can also create channels with the produce function.
Channels can also be useful for communication between UI components: an adapter can pass click events to its Fragment/Activity via a Channel or an Actor, for example.
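As an illustration, produce builds a ReceiveChannel directly from a coroutine; a tiny sketch (the values are arbitrary):
fun CoroutineScope.numbers(): ReceiveChannel<Int> = produce {
    for (i in 1..5) send(i) // the channel is closed automatically when this block completes
}
// inside another coroutine:
// for (n in numbers()) println(n)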
Related readings:
Coroutines guide
Guide to UI programming with coroutines
Understanding Android Core: Looper, Handler, and HandlerThread
Presenter as a Function: Reactive MVP for Android Using Kotlin Coroutines
Sample DiffUtil implementation
|
Posted almost 7 years ago
This is part of an article series covering VLC's Objective-C framework, which we provide to allow inclusion of all its features in third-party applications as well as in VLC for iOS and Apple TV.
Previously published:
Part 1: What is VLCKit and how does it work? How to use it?
Part 2: Metadata handling.
Today, we will discuss thumbnailing of video content. We need to differentiate two key aspects: saving still images of a currently playing video (snapshots) and previewing media stored somewhere without playing it (thumbnails). Either way, VLCKit will open the resource and decode the bitstream to provide you with an image, but performance and usability will differ.
Thumbnailing
Let's start with thumbnailing a non-playing media source, which can be stored locally or remotely.
@interface DummyObject : NSObject <VLCMediaThumbnailerDelegate>
@end

@implementation DummyObject
- (void)workerMethod
{
// 1
NSURL *url = [NSURL URLWithString:@""];
VLCMedia *media = [VLCMedia mediaWithURL:url];
// 2
VLCMediaThumbnailer *thumbnailer = [VLCMediaThumbnailer thumbnailerWithMedia:media delegate:self];
// 3
CGSize thumbSize = CGSizeMake(800.,600.);
thumbnailer.thumbnailWidth = thumbSize.width;
thumbnailer.thumbnailHeight = thumbSize.height;
// 4
[thumbnailer fetchThumbnail];
}
- (void)mediaThumbnailer:(VLCMediaThumbnailer *)mediaThumbnailer didFinishThumbnail:(CGImageRef)thumbnail
{
// 5
if (thumbnail) {
UIImage *thumbnailImage = [UIImage imageWithCGImage:thumbnail scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
if (thumbnailImage) {
// TODO: do something with the thumbnail!
}
}
}
- (void)mediaThumbnailerDidTimeOut:(VLCMediaThumbnailer *)mediaThumbnailer
{
// TODO: Show a reaction
}
@end
We need to create an NSURL instance along with its VLCMedia representation. Note that the URL may point to either a local or a remote resource.
We create the thumbnailer instance for our media and point to ourselves as a delegate to receive the thumbnail.
We define the size of the resulting thumbnail. If width and height are both set to zero, the video’s original size will be used. If you set either one to zero, the aspect ratio is preserved.
Finally, we call the thumbnailer’s worker function.
Asynchronously, after about two to twenty seconds, the thumbnailer will call back our delegate. It is important to check the thumbnail for NULL before bridging it to a UIImage or NSImage, and to check the result afterwards as well, since the conversion can fail. That’s all.
You might be wondering how the thumbnailer decides which frame to return. This is based on a more complex algorithm currently depending on the media’s duration and availability of key frames. Future versions may also analyze the image content.
You can override this behavior with the thumbnailer’s snapshotPosition property (in a 0.0 to 1.0 range).
Snapshots
The VLCMediaPlayer class includes a very basic API that allows you to take any number of snapshots during playback; they are asynchronously stored as local files. The size parameters follow the same pattern as for the thumbnailer.
- (void)workerMethod
{
// ...
[_mediaplayer saveVideoSnapshotAt:(NSString *)path withWidth:(int)width andHeight:(int)height];
// ...
}
As soon as a snapshot is stored, a VLCMediaPlayerSnapshotTaken notification is emitted and mediaPlayerSnapshot: is called on the media player’s delegate. Note that the delegate call is available on iOS and tvOS only.
As a convenience starting in VLCKit 3.0 on iOS and tvOS, the media player class exposes the lastSnapshot and snapshots properties, which provide a UIImage instance of the last shot as well as a list of files of the taken shots.
That’s all for today. Enjoy using VLCKit!
|
Posted
almost 7 years
ago
10 days ago, we published VLC media player 3.0 for all platforms. It’s the first major release in three years and brings a huge number of features, improvements and fixes. Get an overview here and the full changelog there.
For VLCKit, we improved performance and memory management, added new APIs, and you get all the improvements from the underlying libvlc, including full hardware-accelerated decoding of H264 and H265 using VideoToolbox. Instead of using all cores of your iPhone’s CPU at 100%, decoding a 4K video now uses less than 20%.
Furthermore, you can look at all aspects of a 360° video with touch-gesture-based controls, and discover and browse shares on your network with UPnP, NFS, FTP, SFTP, SMB and more.
As you may remember, we published VLC for Apple TV in January 2016, but until now we had never made VLCKit available on tvOS. In addition to MobileVLCKit for iOS, we now introduce TVVLCKit for tvOS!
For macOS, iOS and tvOS, VLCKit 3.0 is available through CocoaPods as a precompiled binary under the LGPLv2.1 license. You can find the source code on our website – contributions welcome!
We are looking forward to your feedback and to the apps that will use VLCKit to deliver multimedia to their users.
Do you want to learn more about integrating VLCKit? Have a look at the tutorials I wrote not too long ago (Part 1, Part 2).
So what did we change in VLCKit, API-wise?
New APIs:
- VLCAudio
- setMuted:
- VLCDialogProvider
- new class to handle user interaction with VLC events
- VLCLibrary
- added properties: debugLogging, debugLoggingLevel
- VLCMediaDiscoverer
- added selector: availableMediaDiscovererForCategoryType:
- added enum: VLCMediaDiscovererCategoryType
- VLCMediaListPlayer
- added selectors:
initWithDrawable:
initWithOptions:andDrawable:
playItemAtNumber:
- VLCMediaPlayer
- added properties:
titleDescriptions
indexOfLongestTitle
numberOfTitles
snapshots
lastSnapshot
- added selectors:
chaptersForTitleIndex:
numberOfChaptersForTitle:
addPlaybackSlave:type:enforce:
updateViewpoint:pitch:roll:fov:absolute:
- added notifications: VLCMediaPlayerTitleChanged, VLCMediaPlayerChapterChanged
- added enum: VLCMediaPlaybackSlaveType
Note:
- play's return type was changed from BOOL to void
- hue is now a float instead of an integer
WARNING:
- Return value of the following methods changed from INT_MAX to -1
(int)currentVideoTrackIndex
(int)currentVideoSubTitleIndex
(int)currentChapterIndex
(int)currentTitleIndex
(int)currentAudioTrackIndex
- VLCMedia
- added keys: VLCMetaInformationTrackTotal, VLCMetaInformationDirector,
VLCMetaInformationSeason, VLCMetaInformationEpisode,
VLCMetaInformationShowName, VLCMetaInformationActors,
VLCMetaInformationAlbumArtist, VLCMetaInformationDiscNumber,
VLCMediaTracksInformationVideoOrientation,
VLCMediaTracksInformationVideoProjection
- added selectors:
codecNameForFourCC:trackType:
mediaType
parseWithOptions:
parseWithOptions:Timeout:
parsedStatus
storeCookie:forHost:path:
clearStoredCookies
- added enums: VLCMediaType, VLCMediaParsingOptions, VLCMediaParsedStatus, VLCMediaOrientation, VLCMediaProjection
- changed behavior: media will no longer be parsed automatically if meta data is requested prior to concluded parsing
- VLCMediaList
- changed behavior: lists of media objects added through arrays or on init are no longer added in reverse order
- VLCTime
- added selectors:
isEqual:
hash
- VLCAudio
- added property: passthrough
Modified APIs:
- VLCMediaList
- To match the KVC bindings, all NSInteger arguments were moved to NSUInteger as appropriate
- mediaList:mediaAdded:atIndex:
- mediaList:mediaRemovedAtIndex:
- addMedia:
- insertMedia:atIndex:
- removeMediaAtIndex:
- mediaAtIndex:
Deprecated APIs:
- VLCAudio
- setMute:
- VLCMedia
- parse, isParsed, synchronousParse
- VLCMediaDiscoverer
- availableMediaDiscoverer, localizedName
- VLCMediaPlayer
- titles, chaptersForTitleIndex:, countOfTitles, framesPerSecond, openVideoSubTitlesFromFile:
- VLCMediaListPlayer
- playItemAtIndex
- VLCStreamSession
- VLCStreamOutput
- VLCMediaLibrary
Removed APIs:
- VLCExtension
- VLCExtensionsManager
- VLCMedia:
- fps
- media:metaValueChangedFrom:forKey:
- VLCMediaPlayer
- audioTracks
- videoTracks
- videoSubTitles
- VLCServicesDiscoverer
- VLCPlaylistDataSource
|
Posted
almost 7 years
ago
Version 2.5 has been a nice upgrade for VLC on Android. Now it’s been stabilized, and we are finally shipping the long awaited version 3.0!
VLC 3.0 is the first ever synchronized release between desktop application and mobile ports. Today, VLC is
released everywhere, with the same version number and at the same time. It will be simpler for everyone, including VideoLAN developers 😊
On Android, this release mainly brings Chromecast support, but it also fills some gaps left by prior versions. VLC on Android keeps becoming more complete, and it will continue!
Incoming features
Chromecast
VLC everywhere
Playlist files
Features Catch-Up
Delete is back
Fast seek
Subs (not) auto-loading
Why Chromecast support took so long
Chromecast
Stop vertical videos now! Turn your phone horizontally when you record your children, because you’re going to show them to the family on the big screen now 📺
Chromecast support is finally here. As soon as a Chromecast is detected by VLC, you can send it a video or audio media and enjoy watching it!
If the media codecs are supported by your Chromecast device, VLC only acts as a streaming server (which already consumes battery). If not, VLC will transcode and stream the media, which is highly CPU- and battery-intensive.
Please consider Chromecast support to be in beta for now; we will work on hardening it in the upcoming weeks, thanks to your feedback.
VLC everywhere
VLC for Android is also available on other Android platforms like DeX, Chromebooks and Android Auto.
You can now drop media files onto VLC from other applications and right-click on media in VLC to get the context menu.
Android Auto allows you to easily command VLC with a simplified UI or even by voice while driving.
You can ask “play Daft Punk (with VLC)” and Google Assistant will recognize whether it’s an artist, an album or a song you’re asking for and tell VLC to play it.
Playlist files
Version 2.5 suffered from a painful regression: the lack of playlist file support. This was due to the migration to our new multiplatform medialibrary. It is now fixed with this update: VLC can scan your .m3u files again and show your playlists.
Features Catch-Up
Delete is back
VLC 2.5 had trouble deleting files on internal storage on Oreo; this is now resolved and the new permission model is properly handled.
But the big news is support for media deletion on external devices for Android Lollipop+ devices. All you have to do is select the SD card/USB key in the awful Google dialog, and then VLC can delete any file on it!
For this special process, we prepared a small tutorial in the app.
Fast seek
The ‘Fast seek’ VLC option is now activated by default: VLC will now load faster when you change the current position during media playback.
This can be deactivated in Settings → Video → ‘Enable fast seek option’
Subs (not) auto-loading
Not everyone wants subtitles, and we had not focused on this because most VideoLAN developers are European. But now you can deactivate automatic subtitle loading in Settings → Subtitles → ‘Auto load subtitles’.
Enjoy your videos without distraction now!
Why Chromecast support took so long
Chromecast support is everywhere, and it took VLC years to get it, right. But there are plenty of good reasons for that:
First of all, VideoLAN is a nonprofit organization, not a company. Few developers are paid to work on VLC; most of them do it in their free time. That’s how you get VLC for free and without any ads!
Also, VLC is 100% open source and the Chromecast SDK isn’t: we had to develop our very own Chromecast stack ourselves. This is also why there are no voice actions for VLC (except with Android Auto): we cannot use Google Play Services.
Furthermore, Chromecast is not designed to play local video files: when you watch a YouTube video, your phone is just a remote control, nothing more. Chromecast streams the video from youtube.com.
That’s where it becomes complicated: Chromecast only supports a very small set of codecs, let’s say H264. Google ensures that your video is encoded in H264 on youtube.com, so streaming is simple.
With VLC, you can have media in any format. So VLC has to act as an HTTP server, like youtube.com, and provide the video in a Chromecast-compatible format. And of course in real time, which is challenging on Android because phones are less powerful than computers.
Lastly, VLC was not designed to display a video on another screen. It took time to properly redesign VLC to support this nicely. The good news is that we did not build Chromecast-specific support but generic renderer support: in the next months we can add UPnP support for example, to cast to any UPnP box or TV!
|
Posted
about 7 years
ago
After receiving many requests, the VideoLAN organization finally started accepting Bitcoin donations in February 2014. Since then, we have received around 12 BTC. During the past year, the price skyrocketed, and today we still hold more than 10 BTC. This unexpected surge of the Bitcoin market price makes it difficult to HODL securely, and the volatility, on top of the processing time, makes it quite impractical to spend.
The introduction of the Bitcoin whitepaper by Satoshi Nakamoto in 2008 was an amazing breakthrough and is already reshaping the future of payments and banking as a whole but we don’t think BTC (and derivatives) will be the de facto crypto-currency the world will use for daily transactions.
One of the key reasons is the transparency of the Bitcoin blockchain. Since our donation address is publicly available, the wealth we hold can easily be tracked between addresses, and that makes us vulnerable. In the crypto-currency world you are your own bank, so you can become a target as soon as you hold enough money. No one besides you can protect your funds from being stolen in a virtual, or even physical, attack. And we don’t want any of our community members to be harmed because of that. We truly think privacy on the blockchain cannot be optional. Encrypting all transaction details (sender, receiver and amount) is the most effective way to protect people in a crypto-currency era.
Also, because of this blockchain transparency, not all coins exchanged are equal. You may think that 1 BTC is worth any other BTC, but let me tell you, that is a wrong assumption. Imagine for a second that you received a payment containing a coin that was previously used (many transactions ago) to trade “stuff” on Silkroad: your coin is now forever tainted and might not be accepted by exchanges or payment processors. This lack of fungibility can make some of your money worth nothing, or worse, you could even be accused of money laundering.
Another major issue we experienced with Bitcoin is the processing time. Anyone dealing with BTC these days can attest that making a payment is a pain and can easily take hours if not days, especially if you have a lot of small inputs (like we do). Sending some random amount can cost us more than $100 in fees and can still take days to be processed. We usually end up using a free tx accelerator in order to speed things up. And we’re not alone in this space: a few days ago, Steam published a blog post saying they are no longer accepting Bitcoin payments for these reasons.
Let’s be honest, it’s the golden age of crypto-currencies: new coins are launched every day, and many of the people investing are betting on huge gains, while most don’t care, or are simply fooled, about what the technology can effectively bring. But in the end, maybe five years from now, only a handful of crypto-currencies will be widely adopted. To qualify as a possible winner, there are some characteristics to look for:
Must be fully open-source
Trusted development team
Active and sane community
Solid technological grounds
Scalable while maintaining low fees
Private and secure
Fungible
User-friendly (mobile, hardware wallets, …)
Out of pre-mining / ICO / scam
Let me reassure you, we are not yet at the point where we would think of launching our own ICO, but if we were, we would definitely call it the ConeCoin! ;-)
We believe there is still no perfect crypto-currency, and we’re still quite far from it, but we think that one is more in line with our core values and fits most of the critical points raised above: Monero.
Therefore, starting on January 1st, 2018, we will officially support donations in Moneroj (= plural of Monero).
We’re not at the point where we’ll stop accepting Bitcoin donations, but we think Bitcoin has become more a store of value than digital cash, and we hope to see more individuals, open-source projects and stores accept Monero in the future.
Of course, 99% of the donations we receive are still made with fiat money, and that’s how we pay for servers, travel, meetings, conferences, hardware, goodies, … and we cannot thank our donors enough for their support!
VLC 3.0 is about to be released, and if you too want to support the VideoLAN project, head to our donation page (soon with Monero too!).
|