

The LITMUS Benchmark Suite is an open, extensible framework for benchmarking a plethora of cross-domain Data Management Solutions (DMSs).
Apart from automating the tedious process of benchmarking, it also offers:
  • An efficient way to replicate existing benchmarks (e.g., BSBM, WatDiv).
  • A wide set of performance evaluation measures and indicators tailored to user needs.
  • Custom visualizations (charts, graphs, and tabular data) of the benchmark results for faster insight.


Developments in the context of Open, Big, and Linked Data have led to an enormous growth of structured data on the Web. To keep pace with the efficient consumption and management of data at this rate, many Data Management Solutions have been developed. Many efforts exist for benchmarking these domain-specific DMSs; however,
  • reproducing these third-party benchmarks is an extremely tedious task, and
  • there is a lack of a common framework that enables and advocates the extensibility and reusability of benchmarks.
LITMUS will go beyond classical storage benchmarking frameworks by allowing for analysing the performance of DMSs across query languages.


The focus of LITMUS is to bridge the gaps in adopting, deploying, and scaling the consumption of Linked Data. LITMUS strives to simplify the use, assessment, and performance analysis of a wide spectrum of cross-domain DMSs.
In particular, the LITMUS framework will:
  • Enable a common platform for benchmarking and comparing a plethora of cross-domain DMSs, and reproducing existing third-party benchmarks.
  • Create interoperable machine-readable evaluation reports and scientific studies on the correlation of a variety of factors (such as query typology, data structures used for indexing, etc.) with respect to the performance of DMSs.
  • Recommend particular DMSs and benchmarks based on a set of requirements predefined by the user.


LITMUS architecture (diagram)


Scientific Papers

  • Various RDF data representation formats, their conversion complexity, and the challenges involved [C1].
  • Query language expressivity and supported features, striving to address the language barrier [C2]. These studies provide deep insights into the functionality of various query languages and RDF data formats, as well as their strengths and limitations.
  • An exhaustive exploratory study on the selection of performance measures for evaluating cross-domain DMSs, addressing the associated challenges [C3].


RDF to PG Converter

A novel converter of RDF data into multiple data formats (such as CSV, JSON, SQL, etc.), providing compatible input data for the cross-domain DMSs.
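The idea behind such a converter can be sketched as follows. This is a minimal illustration, not the LITMUS implementation: it flattens simple N-Triples statements into a CSV triple table that a relational DMS could ingest (a real converter would use a full RDF parser such as rdflib; the regex here covers only the simplest statements).

```python
import csv
import io
import re

# Illustrative input: two simple N-Triples statements.
NT = """\
<http://ex.org/alice> <http://xmlns.com/foaf/0.1/knows> <http://ex.org/bob> .
<http://ex.org/bob> <http://xmlns.com/foaf/0.1/name> "Bob" .
"""

# Matches '<s> <p> <o> .' or '<s> <p> "literal" .' (simplest cases only).
TRIPLE = re.compile(r'<([^>]+)>\s+<([^>]+)>\s+(<[^>]+>|"[^"]*")\s*\.')

def ntriples_to_csv(nt: str) -> str:
    """Flatten N-Triples into a subject/predicate/object CSV table."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["subject", "predicate", "object"])
    for line in nt.splitlines():
        m = TRIPLE.match(line.strip())
        if m:
            s, p, o = m.groups()
            # Strip angle brackets from IRIs and quotes from literals.
            writer.writerow([s, p, o.strip('<>"')])
    return out.getvalue()

print(ntriples_to_csv(NT))
```

The same triple table could then be loaded into SQL, or serialized to JSON, which is what makes a single RDF source usable as input across heterogeneous DMSs.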

Query Translator

A novel query translator for the automatic conversion of SPARQL to DMS-specific query languages (e.g., Gremlin, via Gremlinator), enabling compatible query input for cross-domain DMSs.
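To illustrate the kind of mapping involved, the toy function below translates a single SPARQL basic graph pattern into a Gremlin traversal string. This is a sketch of the idea only, not Gremlinator's actual algorithm: it handles one `?s <p> ?o` pattern and simply uses the IRI's local name as the edge label.

```python
import re

def bgp_to_gremlin(pattern: str) -> str:
    """Translate a single '?s <pred> ?o' SPARQL pattern into a Gremlin string."""
    m = re.match(r"\?(\w+)\s+<([^>]+)>\s+\?(\w+)", pattern.strip())
    if not m:
        raise ValueError("only simple '?s <p> ?o' patterns are supported")
    s, pred, o = m.groups()
    label = pred.rsplit("/", 1)[-1]  # IRI local name becomes the edge label
    return f"g.V().as('{s}').out('{label}').as('{o}').select('{s}','{o}')"

print(bgp_to_gremlin("?person <http://xmlns.com/foaf/0.1/knows> ?friend"))
# g.V().as('person').out('knows').as('friend').select('person','friend')
```

A full translator must also handle joins between multiple patterns, filters, optional clauses, and solution modifiers, which is where the real complexity of crossing the query-language barrier lies.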

Litmus Docker

An open, extensible benchmarking platform for cross-domain DMS performance evaluation and easy replication of existing benchmarks.



Harsh Thakkar Researcher

Full-time researcher at the Informatik III department of the University of Bonn, Germany.

Gëzim Sejdiu Researcher

Gëzim Sejdiu is a PhD student & research associate at the University of Bonn. Gëzim's research interests are in the areas of the Semantic Web, Big Data, and Machine Learning. He is also interested in distributed computing systems (Apache Spark, Apache Flink).

Yashwant Keswani Researcher

Final year undergraduate student at DA-IICT, Gandhinagar.

Contact Us

Harsh Thakkar
A110, Informatik III, University of Bonn
Roemerstrasse 164, 53117 Bonn, Germany
E: hthakkar[at]