Paper Title


Faasm: Lightweight Isolation for Efficient Stateful Serverless Computing

Authors

Simon Shillaker, Peter Pietzuch

Abstract


Serverless computing is an excellent fit for big data processing because it can scale quickly and cheaply to thousands of parallel functions. Existing serverless platforms isolate functions in ephemeral, stateless containers, preventing them from directly sharing memory. This forces users to duplicate and serialise data repeatedly, adding unnecessary performance and resource costs. We believe that a new lightweight isolation approach is needed, which supports sharing memory directly between functions and reduces resource overheads. We introduce Faaslets, a new isolation abstraction for high-performance serverless computing. Faaslets isolate the memory of executed functions using software-fault isolation (SFI), as provided by WebAssembly, while allowing memory regions to be shared between functions in the same address space. Faaslets can thus avoid expensive data movement when functions are co-located on the same machine. Our runtime for Faaslets, Faasm, isolates other resources, e.g. CPU and network, using standard Linux cgroups, and provides a low-level POSIX host interface for networking, file system access and dynamic loading. To reduce initialisation times, Faasm restores Faaslets from already-initialised snapshots. We compare Faasm to a standard container-based platform and show that, when training a machine learning model, it achieves a 2x speed-up with 10x less memory; for serving machine learning inference, Faasm doubles the throughput and reduces tail latency by 90%.
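The two ideas at the heart of the abstract can be illustrated with a minimal sketch: software-fault isolation (SFI) clamps every memory access into a function's own private region, while a shared region mapped into the same address space lets co-located functions exchange state with no copying or serialisation. The sketch below is in Python for brevity; the `Sandbox` class and its methods are hypothetical names for illustration, not Faasm's actual API or implementation.

```python
# Illustration only: the core ideas behind Faaslets, sketched in Python.
# "Sandbox", "store", and "load" are hypothetical names, not Faasm's API.

REGION_SIZE = 4096  # a power of two, so a bitmask can clamp offsets

class Sandbox:
    """A Faaslet-like unit: private linear memory, plus a shared region
    mapped into the same address space as other sandboxes."""
    def __init__(self, shared: bytearray):
        self.memory = bytearray(REGION_SIZE)  # private to this function
        self.shared = shared                  # shared state, zero-copy

    # SFI in miniature: every access is masked into this sandbox's own
    # region, so it can never touch another sandbox's private memory.
    def store(self, offset: int, value: int) -> None:
        self.memory[offset & (REGION_SIZE - 1)] = value

    def load(self, offset: int) -> int:
        return self.memory[offset & (REGION_SIZE - 1)]

shared = bytearray(REGION_SIZE)
a, b = Sandbox(shared), Sandbox(shared)

# Function A publishes state; co-located function B reads it directly,
# with no serialisation or copying (unlike container-isolated functions).
a.shared[0:13] = b"model-weights"
print(bytes(b.shared[0:13]))   # b'model-weights'

# An out-of-bounds store from A is clamped into A's own region and
# cannot corrupt B's memory.
a.store(REGION_SIZE + 7, 42)
print(a.load(7))               # 42
```

In the real system, the masking is enforced by WebAssembly's linear-memory model rather than hand-written checks, and other resources (CPU, network) are confined with standard Linux cgroups, as the abstract notes.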
