Most of the talks are arranged by project. Some talks were given multiple times; please choose the most recent version. Enjoy watching them!
While working on communication and synchronization primitives for Kotlin Coroutines, I have given several talks related to the project.
This project started with a novel semaphore algorithm for Kotlin Coroutines. After that, we decided to create a flexible abstraction for implementing synchronization and communication primitives. This abstraction, called SegmentQueueSynchronizer, makes the development of such primitives much easier, yielding simpler and more efficient implementations. Since we also support abortability of waiting requests (e.g., a lock operation can be aborted by timeout), and these parts of the algorithms are usually the most complicated and error-prone, we have formally verified everything via the Iris framework for Coq.
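To illustrate what "abortability of waiting requests" means in practice, here is a minimal sketch using the JDK's `java.util.concurrent.Semaphore` rather than the Kotlin Coroutines implementation discussed in the talk: a blocked acquisition gives up after a timeout instead of waiting forever.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class AbortableAcquire {
    public static void main(String[] args) throws InterruptedException {
        Semaphore semaphore = new Semaphore(1);
        semaphore.acquire(); // take the only permit

        // An abortable waiting request: tryAcquire gives up after 100 ms
        // instead of blocking forever. Handling such aborted waiters
        // correctly is the error-prone part the formal proofs cover.
        boolean acquired = semaphore.tryAcquire(100, TimeUnit.MILLISECONDS);
        System.out.println("acquired = " + acquired); // prints "acquired = false"

        semaphore.release();
        System.out.println("acquired after release = " + semaphore.tryAcquire());
    }
}
```

The interesting complication, which this sketch hides, is that in a queue-based semaphore an aborted waiter must be removed from the middle of the waiting queue without breaking concurrent `release` calls.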
Most programming languages are adopting asynchronous programming mechanisms. Kotlin, for its part, implements coroutines, which use channels to communicate with each other. High-load applications therefore depend on the performance of these channels, which must be implemented in an efficient and scalable way.
In this talk, we’ll discuss which channel algorithms other programming languages and libraries use, how we develop our own solution in Kotlin, which challenges and subtle aspects we encountered along the way, and what we managed to achieve.
Unlike traditional concurrent programming, which manipulates shared mutable state, coroutines use channels for communication. Essentially, you can send a value into a channel from one coroutine and receive it in another. Moreover, send and receive operations can be synchronized, so the sender waits for the receiver or vice versa. In this talk, we’ll consider the existing channel algorithms, discuss the one developed in Kotlin Coroutines, and find out which is better: Kotlin or Go.
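The sender-waits-for-receiver synchronization described above is a rendezvous. As a hedged JDK analogy (this is not the Kotlin channel algorithm from the talk), `java.util.concurrent.SynchronousQueue` behaves like an unbuffered channel: a `put` does not complete until a matching `take` arrives.

```java
import java.util.concurrent.SynchronousQueue;

public class RendezvousDemo {
    public static void main(String[] args) throws InterruptedException {
        // A SynchronousQueue has no buffer: put() waits for a matching take(),
        // so sender and receiver synchronize, like a rendezvous channel.
        SynchronousQueue<Integer> channel = new SynchronousQueue<>();

        Thread sender = new Thread(() -> {
            try {
                channel.put(42); // blocks until someone receives
            } catch (InterruptedException ignored) {}
        });
        sender.start();

        int received = channel.take(); // matches the pending put
        sender.join();
        System.out.println("received = " + received); // prints "received = 42"
    }
}
```

The key difference in the coroutine setting is that a suspended sender parks a lightweight coroutine rather than an OS thread, which is what makes the channel implementation performance-critical.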
This set of talks is about Lincheck, a tool for testing concurrent code on the JVM.
Everybody knows that concurrent programming is bug-prone. Moreover, some bugs in complicated algorithms occur rarely and are hard to reproduce; thus, they are difficult to detect via simple hand-written tests. In this talk, we discuss Lincheck, a tool for testing and debugging concurrent code, covering both its capabilities and API, and its implementation details.
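As a toy illustration of the kind of rare, hard-to-reproduce bug meant here (this is plain JDK code, not Lincheck's API), two threads incrementing a non-atomic counter can lose updates, yet a lucky interleaving may still produce the correct value:

```java
public class LostUpdateDemo {
    static int counter = 0; // intentionally NOT atomic

    public static void main(String[] args) throws InterruptedException {
        final int INCREMENTS = 100_000;
        Runnable task = () -> {
            // counter++ is a racy read-modify-write: two threads can read
            // the same value and one increment gets lost.
            for (int i = 0; i < INCREMENTS; i++) counter++;
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // The result is usually below 200000, but a rare interleaving can
        // still yield exactly 200000 -- which is why a single test run
        // proves nothing and tools like Lincheck run many scenarios.
        System.out.println("counter = " + counter);
        System.out.println("no phantom increments: " + (counter <= 2 * INCREMENTS));
    }
}
```

Lincheck automates exactly this loop at a higher level: it generates operation scenarios, executes them repeatedly (or model-checks interleavings), and verifies the outcomes against the sequential specification.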
Writing concurrent programs is hard. However, testing them is not easy either. In this talk, I present Lincheck, a tool for testing concurrent algorithms in JVM-based languages. Essentially, it takes declarations of the data structure's operations, generates random scenarios, runs them many times, and verifies the corresponding results. During the talk, we will mostly focus on the verification phase.
First, we will discuss how the LTS (Labeled Transition System) formalism helps implement the verification phase efficiently. After that, I extend LTS to support dual data structures; this formalism is useful for data structures with partial operations (synchronous queues, blocking queues, channels, semaphores, …). However, such an extension is not sufficient on its own, and a few small tricks are needed; they restrict the model slightly but make it usable in practice.
Latest: Joker 2017
Writing multithreaded programs is hard; testing them is no easier. Thinking through all dangerous execution scenarios and writing tests for them is a difficult task, so testing is often limited to a simple set of stress tests that are not always enough to detect an error. To solve this problem, the Lincheck tool was developed, which automatically checks multithreaded Java code for linearizability. The first part of the talk is dedicated to the various strategies and techniques for checking linearizability. In the second part, we discuss the provided API and how to use it to test your own data structures.
These talks are related to relatively small projects or simply cannot be assigned to any single one. However, they are also important to me.
Designing and implementing efficient parallel priority schedulers is an active research area. An intriguing design is the Multi-Queue: given n threads and m ≥ n distinct priority queues, task insertions are performed uniformly at random, while, to delete, a thread picks two queues uniformly at random and removes the observed task of higher priority. This approach scales well and provides theoretical fairness guarantees but is slower than well-engineered schedulers. In this talk, we present a novel Stealing Multi-Queue (SMQ) algorithm, which builds on the Multi-Queue design, making it more cache-efficient, significantly reducing contention, and removing locks from the fast path. The resulting SMQ implementation can surpass state-of-the-art schedulers such as OBIM and PMOD in performance on popular graph-processing benchmarks.
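The insert/delete rule above can be sketched in a few lines. This is a simplified, lock-based toy of the basic Multi-Queue design only (class and method names are illustrative; the actual SMQ is lock-free on the fast path and adds stealing and cache-efficiency optimizations):

```java
import java.util.PriorityQueue;
import java.util.Random;

public class MultiQueueSketch {
    private final PriorityQueue<Integer>[] queues; // m distinct priority queues
    private final Random random = new Random();

    @SuppressWarnings("unchecked")
    MultiQueueSketch(int m) {
        queues = new PriorityQueue[m];
        for (int i = 0; i < m; i++) queues[i] = new PriorityQueue<>();
    }

    // Insert into a queue chosen uniformly at random.
    synchronized void insert(int task) {
        queues[random.nextInt(queues.length)].add(task);
    }

    // Pick two random queues; remove the better (smaller) of the two heads.
    synchronized Integer deleteMin() {
        int i = random.nextInt(queues.length), j = random.nextInt(queues.length);
        Integer a = queues[i].peek(), b = queues[j].peek();
        if (a == null && b == null) return null;
        if (a == null) return queues[j].poll();
        if (b == null) return queues[i].poll();
        return a <= b ? queues[i].poll() : queues[j].poll();
    }

    public static void main(String[] args) {
        MultiQueueSketch mq = new MultiQueueSketch(4);
        for (int t = 10; t > 0; t--) mq.insert(t);
        // deleteMin() returns an *approximate* minimum (the best of two
        // random heads), but draining removes every task exactly once.
        int sum = 0;
        Integer task;
        while ((task = mq.deleteMin()) != null) sum += task;
        System.out.println("sum of deleted tasks = " + sum); // 1+2+...+10 = 55
    }
}
```

The two-choice deletion is what gives the probabilistic rank guarantees: a single random queue head can be arbitrarily bad, but the better of two choices is close to the global minimum with high probability.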
Latest: JPoint 2018
This talk is devoted to transactional memory, which appears in modern processors more and more often and is supposed to make the world of concurrency a better place. In this talk, we discuss possible ways to use transactional memory to improve performance, the existing optimizations in OpenJDK based on it, and the possibility of running transactions directly from Java code.
Latest: JBreak 2018
Nowadays, hash tables are probably the most widely used data structures, and their performance influences many application components. However, is it that simple to write a fast implementation that exploits the full power of multi-core architectures? How efficient are the standard solutions in Java? We will try to answer these and many other questions during this talk, touching on both theoretical aspects and practical approaches to developing high-performance algorithms.
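As a small example of one of the "standard solutions in Java" the talk examines, `ConcurrentHashMap` avoids a global lock: an atomic per-key update such as `merge` contends only on the affected bin, so threads updating different keys scale across cores.

```java
import java.util.concurrent.ConcurrentHashMap;

public class WordCount {
    public static void main(String[] args) throws InterruptedException {
        // merge(key, 1, Integer::sum) is an atomic per-key increment;
        // threads touching different keys do not block each other.
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        String[] words = {"a", "b", "a", "c", "a", "b"};

        Thread[] threads = new Thread[2];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (String w : words) counts.merge(w, 1, Integer::sum);
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join();

        // "a" occurs 3 times per thread, 2 threads: always exactly 6.
        System.out.println("a -> " + counts.get("a")); // prints "a -> 6"
    }
}
```

Note that the same loop over a plain `HashMap` would be a data race; the interesting question the talk digs into is what such fine-grained synchronization costs internally.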
Latest: JEEConf 2017
Deadlocks are one of the main problems in multithreaded programming. This talk presents Dl-Check, a novel tool for detecting potential deadlocks at runtime in Java programs. Implementing this tool requires bytecode instrumentation, for which the ASM framework proves advantageous. The first part of the talk presents the general approach and the algorithm for online detection of potential deadlocks. The second part discusses bytecode instrumentation along with several useful techniques and cases related to it.
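The core idea behind "potential" deadlock detection can be sketched as a lock-order graph: record a directed edge A → B whenever lock B is acquired while A is held; a cycle signals a potential deadlock even if it never actually manifested in this run. The sketch below is illustrative only (the class and method names are hypothetical, not Dl-Check's API, and Dl-Check maintains its graph online via instrumentation):

```java
import java.util.*;

public class LockOrderGraph {
    // Directed edges: held lock -> lock acquired while holding it.
    private final Map<String, Set<String>> edges = new HashMap<>();

    void acquiredWhileHolding(String held, String acquired) {
        edges.computeIfAbsent(held, k -> new HashSet<>()).add(acquired);
    }

    boolean hasCycle() {
        Set<String> visiting = new HashSet<>(), done = new HashSet<>();
        for (String v : edges.keySet())
            if (dfs(v, visiting, done)) return true;
        return false;
    }

    private boolean dfs(String v, Set<String> visiting, Set<String> done) {
        if (visiting.contains(v)) return true;   // back edge => cycle
        if (done.contains(v)) return false;      // already explored
        visiting.add(v);
        for (String w : edges.getOrDefault(v, Set.of()))
            if (dfs(w, visiting, done)) return true;
        visiting.remove(v);
        done.add(v);
        return false;
    }

    public static void main(String[] args) {
        LockOrderGraph g = new LockOrderGraph();
        g.acquiredWhileHolding("A", "B"); // thread 1: holds A, takes B
        g.acquiredWhileHolding("B", "A"); // thread 2: holds B, takes A
        System.out.println("potential deadlock: " + g.hasCycle());
        // prints "potential deadlock: true"
    }
}
```

The practical challenge the talk addresses is keeping this graph incremental and cheap enough to update on every lock acquisition in a running program.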