Middleware for big data processing: test results
Processing large volumes of data is resource-intensive work that is increasingly delegated not to a single computer but to an entire distributed computing system. As the number of computers in a distributed system increases, so does the effort required to manage the system effectively, and once the system reaches a critical size, considerable effort must be invested in improving its fault tolerance. It is difficult to estimate at what point a particular distributed system needs such facilities for a given workload, so they should instead be implemented in a middleware that works efficiently with a distributed system of any size. It is likewise difficult to estimate whether a volume of data is large or not, so the middleware should also work with data of any volume. In other words, the purpose of the middleware is to provide facilities that adapt a distributed computing system to a given workload. In this paper we introduce such a middleware. Tests show that it is well suited for typical HPC and big data workloads and that its performance is comparable with well-known alternatives.
BibTeX
@article{gankevich2017middleware,
  title     = {Middleware for big data processing: test results},
  author    = {I. Gankevich and V. Gaiduchok and V. Korkhov and A. Degtyarev and A. Bogdanov},
  journal   = {Physics of Particles and Nuclei Letters},
  publisher = {Pleiades Publishing},
  address   = {Moscow, Russia},
  year      = {2017},
  month     = {12},
  day       = {01},
  volume    = {14},
  number    = {7},
  pages     = {1001--1007},
  doi       = {10.1134/S1547477117070068},
  issn      = {1531-8567},
}
Publication: Physics of Particles and Nuclei Letters
Publisher: Pleiades Publishing