Investigate the feasibility of Remote Direct Method Invocations


Cloud computing keeps increasing in popularity: more and more applications are being deployed in third-party datacenters. The lure of cloud computing is that it saves users the hassle of building and maintaining their own infrastructure, which would otherwise have to be (over-)provisioned for peak loads that occur only infrequently. With the aim of delivering peak performance to cloud users while maximizing resource usage, datacenter networks have undergone a significant evolution to keep up with the increasing speed of the datacenter servers they interconnect. A major improvement in communication across servers has come with the advent of remote direct memory access (RDMA). RDMA accelerates remote communication by bypassing the remote CPU when accessing data on remote servers and by employing specialized transport protocols.


The goal of this project is to investigate the feasibility of a programming abstraction inspired by remote method invocation (RMI) executing on top of remote direct memory access (RDMA). From a practical perspective, this investigation will consist of actually attempting to implement this programming abstraction, starting with a simple version of it. Indeed, programming applications that run in third-party datacenters is still non-trivial, as these applications are typically distributed to allow deployment — and in particular "elastic" re-deployment — across servers. An abstraction introduced to simplify distributed application development is remote method invocation (RMI). While RMI is still widely used thanks to its ability to hide distribution to a large extent, cloud programmers nowadays typically resort to more specialized, lower-level paradigms and APIs. One main reason is that RMI is perceived to be "too high level", in that it incurs high overheads, for instance due to its underlying automated marshalling/unmarshalling of application-level data into wire formats. RDMA has the potential to significantly reduce such overheads, yet it is currently used directly, in a tedious and error-prone manner.
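To make the marshalling overhead mentioned above concrete, the following is a minimal sketch of the serialization step that RMI performs automatically on every remote call, using plain Java object serialization. The class and method names are illustrative only and are not part of any existing API; the point is that even a tiny payload is expanded into a noticeably larger wire representation, a cost that an RDMA-based abstraction could aim to avoid.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Illustrative sketch (not an existing API): the marshal/unmarshal pair
// mimics what an RMI runtime does transparently for call arguments.
public class MarshallingSketch {

    // Marshal an object into the wire format used by Java serialization.
    static byte[] marshal(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Unmarshal the wire format back into an application-level object.
    static Object unmarshal(byte[] bytes)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        int[] payload = {1, 2, 3};          // 12 bytes of raw data
        byte[] wire = marshal(payload);      // considerably larger on the wire
        int[] back = (int[]) unmarshal(wire);

        System.out.println("raw bytes:  " + payload.length * 4);
        System.out.println("wire bytes: " + wire.length);
        System.out.println("roundtrip ok: "
                + java.util.Arrays.equals(payload, back));
    }
}
```

Running the sketch shows that the serialized form is several times larger than the raw 12-byte payload, and that both serialization and deserialization involve CPU work and copying on each call — precisely the kind of per-invocation overhead that a carefully designed RDMA-backed invocation mechanism might reduce.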