SMARDY project logo

SMARDY will develop a research data marketplace for technology transfer, built around software and data carpentry (i.e., developing and teaching workshops on the core data skills required to conduct research), where academia, industry, and government can exchange curated datasets, technology, and tools to foster economic and social development.

From a business perspective, such a marketplace brings together data producers and data consumers to foster the delivery of cross-cutting solutions based on an open innovation model. The incentive for researchers and research organisations is twofold: they get to decide who may use their data and under what terms (an innovation leveraging distributed ledger technologies), and they get paid for sharing their data in a pay-per-access fashion.
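The two incentives above (owner-approved access and pay-per-access settlement) can be illustrated with a minimal sketch. This is a toy in-memory model, not SMARDY code; all names (`DataOffer`, `AccessLedger`, and so on) are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a toy in-memory ledger modelling the two
# incentives described above -- owner-approved access and pay-per-access.
# All names here are hypothetical, not part of any SMARDY API.

@dataclass
class DataOffer:
    dataset_id: str
    owner: str
    price_per_access: float              # fee charged for each access
    allowed_consumers: set = field(default_factory=set)

class AccessLedger:
    def __init__(self):
        self.offers = {}
        self.payments = []               # append-only record of settled accesses

    def publish(self, offer: DataOffer):
        self.offers[offer.dataset_id] = offer

    def approve(self, dataset_id: str, consumer: str):
        # The data producer decides who may use the data and on what terms.
        self.offers[dataset_id].allowed_consumers.add(consumer)

    def access(self, dataset_id: str, consumer: str) -> bool:
        offer = self.offers[dataset_id]
        if consumer not in offer.allowed_consumers:
            return False                 # access denied: not approved by the owner
        # Pay-per-access: each use settles the owner's fee on the ledger.
        self.payments.append((consumer, offer.owner, offer.price_per_access))
        return True
```

In a real deployment, the `payments` list would be replaced by transactions on a distributed ledger, so no single party controls the record.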

From an economic perspective, the marketplace integrates tools for smart semantic discovery of relevant data, while promoting an Open Source culture in which researchers do not need to pay to re-execute experiments whose results already exist. Research funding thus has a chance to propel the advance of technology on the shoulders of existing findings.

Imagine a scientist producing state-of-the-art data-intensive (Big Data) experimental findings with significant global impact. Traditionally, they would publish a paper summarising the results. However, the datasets and tools behind such an experiment are equally, if not more, important than the description of the results and the conclusions reached alone. One could argue for using an open access repository to share the data and increase dataset reusability. However, there is not yet a clear, open mechanism to trace how the data are further used by others and, possibly, to monetise that use.
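The missing traceability mechanism could, in its simplest form, be an append-only, hash-chained usage log: each entry commits to the one before it, so retroactive edits are detectable. The sketch below is a hypothetical illustration of this ledger-style idea, not SMARDY's actual design.

```python
import hashlib
import json

# Hypothetical sketch of a tamper-evident usage trace: each entry hashes
# the previous entry's hash, so any retroactive edit breaks the chain.

def _entry_hash(body: dict) -> str:
    # Canonical JSON (sorted keys) makes hashing deterministic.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def record_use(log: list, dataset_id: str, user: str) -> list:
    prev = log[-1]["hash"] if log else "genesis"
    body = {"dataset": dataset_id, "user": user, "prev": prev}
    log.append({**body, "hash": _entry_hash(body)})
    return log

def verify(log: list) -> bool:
    prev = "genesis"
    for entry in log:
        body = {"dataset": entry["dataset"], "user": entry["user"], "prev": entry["prev"]}
        if entry["prev"] != prev or entry["hash"] != _entry_hash(body):
            return False
        prev = entry["hash"]
    return True
```

Anchoring such a log on a distributed ledger would additionally let the data producer be notified, and paid, per recorded use.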

To realise this vision, SMARDY will:
1. Exploit and build on existing open software systems, frameworks, and standards such as public/private cloud infrastructures and distributed ledger technologies;
2. Develop and evaluate open innovation mechanisms for scientific data, specifically with respect to data curation, traceability, and accessibility;
3. Verify that FAIR principles are properly met for the distributed open big data that are processed and generated within research contexts and environments;
4. Intelligently control and coordinate data gathering and processing within a myriad of contexts and environments;
5. Demonstrate the applicability of the project technologies with respect to real-world technology transfer environments;
6. Ensure the uptake of the project technologies by engaging with relevant developer and user communities (including data scientists and industry), by producing a marketplace and technology roadmap, and by pursuing other innovation and exploitation activities that maximise long-term market potential, including an open data strategy for disseminating the project results.
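Objective 3's FAIR verification could, at its most basic, be a metadata completeness check that maps each principle to one required field. This is a deliberately minimal, hypothetical sketch; real FAIR assessment frameworks are far richer, and the field names below are illustrative only.

```python
# Hypothetical sketch: the simplest possible FAIR-compliance check, mapping
# each FAIR principle to one required metadata field. Field names are
# illustrative assumptions, not a standard vocabulary.

FAIR_FIELDS = {
    "findable": "identifier",       # e.g. a persistent identifier such as a DOI
    "accessible": "access_url",     # a resolvable retrieval endpoint
    "interoperable": "format",      # a standard, documented data format
    "reusable": "license",          # clear terms for reuse
}

def check_fair(metadata: dict) -> dict:
    """Return, per FAIR principle, whether its required field is present."""
    return {principle: bool(metadata.get(field))
            for principle, field in FAIR_FIELDS.items()}
```

A marketplace could run such a check on every published dataset and surface the result as a badge, nudging producers toward complete, reusable metadata.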