… a.k.a. the moment of truth.
At the end of June I finished the first, mostly feature-complete version of the ProtobufResourceImpl, which maps, converts, and serializes Ecore objects to ProtoBuf. I then started benchmarking its performance against the existing BinaryResourceImpl. The first results were unexpectedly bad. Therefore I profiled my implementation and removed several bottlenecks until methods from the com.google.protobuf namespace were the most time-consuming ones. This optimized version is labeled ProtobufResourceImpl Dynamic in the results below; it is called Dynamic because it relies solely on the reflection capabilities of Ecore and ProtoBuf to convert the data between the two formats. Its performance was still an order of magnitude worse than that of the BinaryResourceImpl.

To close the gap, I generated a ProtoBuf descriptor file (.proto) for my test model, compiled it with the ProtoBuf compiler (protoc), and implemented a converter that transfers the data between the generated classes and the Ecore objects. Afterwards I benchmarked, profiled, and optimized again. The results of this static conversion implementation are labeled ProtobufResourceImpl Static below.
The benchmark system:
- Intel Core i7-2630QM @ 2.00 GHz
- 4 GB RAM
- Windows 7 64bit
- Java SE 1.6.0_26
The model used for benchmarking is a simple library model similar to the one used in many EMF tutorials. The root object is an instance of Library, which contains Books and Authors. Every Book references an Author. For the test I used a model instance with one Library object containing 15,000 Author objects and 50,000 Book objects.
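The generated descriptor itself isn't reproduced here, but a hand-written sketch of what a .proto file for this library model might look like follows. All message and field names are assumptions, as is the encoding of the cross-reference:

```proto
// Hypothetical descriptor for the benchmark model (proto2 syntax).
message Library {
  repeated Author authors = 1;  // containment: Authors live inside the Library
  repeated Book books = 2;      // containment: Books live inside the Library
}

message Author {
  optional string name = 1;
}

message Book {
  optional string title = 1;
  // Cross-reference to an Author, sketched here as an index into
  // Library.authors; the actual encoding may differ.
  optional int32 author = 2;
}
```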
These are the results obtained with the model containing 65,001 objects on my system after 150 warmup runs and 150 measurement runs for each test.
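The measurement methodology described above — a fixed number of warmup runs to let the JIT settle, followed by timed runs whose mean and standard deviation are reported — can be sketched as a small, self-contained harness. The class and method names are my own; the real benchmark would pass the resource save/load call as the task:

```java
// Minimal sketch of a warmup-then-measure benchmark harness.
public class SerializationBenchmark {

    /** Runs the task warmupRuns times untimed, then measuredRuns times timed;
     *  returns {average, standard deviation} of the wall-clock time in seconds. */
    static double[] measure(Runnable task, int warmupRuns, int measuredRuns) {
        for (int i = 0; i < warmupRuns; i++) {
            task.run(); // untimed: lets the JIT compile the hot paths first
        }
        double[] samples = new double[measuredRuns];
        for (int i = 0; i < measuredRuns; i++) {
            long start = System.nanoTime();
            task.run();
            samples[i] = (System.nanoTime() - start) / 1e9;
        }
        double avg = 0;
        for (double s : samples) avg += s;
        avg /= measuredRuns;
        double var = 0;
        for (double s : samples) var += (s - avg) * (s - avg);
        return new double[] { avg, Math.sqrt(var / measuredRuns) };
    }

    public static void main(String[] args) {
        // Stand-in workload; the real benchmark would serialize or
        // deserialize the 65,001-object test model here instead.
        Runnable task = () -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
            if (sum < 0) throw new IllegalStateException();
        };
        double[] result = measure(task, 150, 150);
        System.out.printf("avg = %.6f s, std = %.6f s%n", result[0], result[1]);
    }
}
```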
*(Results tables: two timing tables giving the average and standard deviation of the run times in seconds for each implementation, and one table giving the serialized size in bytes.)*
If only containment references are used, the serialized size produced by the ProtobufResourceImpl is the same as that of the BinaryResourceImpl.
My next step will be to implement a code generator that creates the static converters for a model automatically.