
Review Article | Open access | Published: 05 March 2020

Big data in digital healthcare: lessons learnt and recommendations for general practice

  • Raag Agrawal 1,2 &
  • Sudhakaran Prabakaran (ORCID: orcid.org/0000-0002-6527-1085) 1,3,4

Heredity volume 124, pages 525–534 (2020)


Subjects: Developing world

Big Data will be an integral part of the next generation of technological developments—allowing us to gain new insights from the vast quantities of data being produced by modern life. There is significant potential for the application of Big Data to healthcare, but there are still some impediments to overcome, such as fragmentation, high costs, and questions around data ownership. Envisioning a future role for Big Data within the digital healthcare context means balancing its benefits for patient outcomes against the potential pitfall of increasing physician burnout if poor implementation adds complexity. Oncology, the field where Big Data collection and utilization got a head start with programs such as TCGA and the Cancer Moonshot, provides an instructive example: the United States (US), the United Kingdom (UK), and other nations have implemented Big Data in patient care with different approaches to the centralization and regulation of data. By drawing upon these global approaches, we propose recommendations for guidelines and regulations of data use in healthcare, centering on the creation of a unique global patient ID that can integrate data from a variety of healthcare providers. In addition, we expand upon the topic by discussing potential pitfalls of Big Data, such as the lack of diversity in Big Data research, and the security and transparency risks posed by machine learning algorithms.


Introduction

The advent of Next Generation Sequencing promises to revolutionize medicine, as it has become possible to cheaply and reliably sequence entire genomes, transcriptomes, proteomes, metabolomes, and more (Shendure and Ji 2008; Topol 2019a). "Genomical" data alone are predicted to reach 2–40 Exabytes by 2025—eclipsing the amount of data acquired by all other technological platforms (Stephens et al. 2015). By 2018, the price of research-grade sequencing of a human genome had dropped to under $1000 (Wetterstrand 2019). Other "omics" techniques, such as proteomics, have also become accessible and cheap, and have added depth to our knowledge of biology (Hasin et al. 2017; Madhavan et al. 2018). Consumer device development has also led to significant advances in clinical data collection, as it becomes possible to continuously collect patient vitals and analyze them in real time. In addition to the falling cost of sequencing, computational power and storage have become extremely cheap. While these developments have brought enormous advances in disease diagnosis and treatment, they have also introduced new challenges, as large-scale information becomes increasingly difficult to store, analyze, and interpret (Adibuzzaman et al. 2018). This problem has given rise to a new era of "Big Data", in which scientists across a variety of fields are exploring new ways to understand the large amounts of unstructured and unlinked data generated by modern technologies, and leveraging them to discover new knowledge (Krumholz 2014; Fessele 2018). Successful scientific applications of Big Data have already been demonstrated in biology, as initiatives such as the Genotype-Tissue Expression (GTEx) project are producing enormous quantities of data to better understand genetic regulation (Aguet et al. 2017). Yet we still see few examples of Big Data being leveraged in healthcare, despite the opportunities it presents for creating personalized and effective treatments.

Effective use of Big Data in healthcare is enabled by the development and deployment of machine learning (ML) approaches, a term often used interchangeably with artificial intelligence (AI). ML and AI only now make it possible to unravel the patterns, associations, correlations, and causations in the complex, unstructured, non-normalized, and unscaled datasets that the Big Data era brings (Camacho et al. 2018). These approaches can provide actionable analyses of datasets as varied as image sequences (applicable in radiology) and narratives such as patient records, via Natural Language Processing (Deng et al. 2018; Esteva et al. 2019), and can bring these datasets together to generate prediction models, such as a patient's response to a treatment regimen. The application of ML tools is also supported by the now widespread adoption of Electronic Health Records (EHRs) following the passage of the Affordable Care Act (2010) and the Health Information Technology for Economic and Clinical Health Act (2009) in the US, and by recent, more limited adoption in the National Health Service (NHS) (Garber et al. 2014). EHRs make patient data more accessible not only to patients and a variety of physicians, but also to researchers, by allowing remote electronic access and easy data manipulation. Oncology care specifically is instructive as to how Big Data can make a direct impact on patient care. Integrating EHRs with diagnostic tests such as MRI, genomic sequencing, and other technologies is the central opportunity for Big Data, as it will allow physicians to better understand the genetic causes behind cancers and therefore design more effective treatment regimens, while also improving prevention and screening measures (Raghupathi and Raghupathi 2014; Norgeot et al. 2019). Here, we survey the current challenges of Big Data in healthcare and use oncology as an instructive vignette, highlighting issues of data ownership, sharing, and privacy. Our review builds on findings from the US, UK, and other global healthcare systems to propose a fundamental reorganization of EHRs around unique patient identifiers and ML.

Current successes of Big Data in healthcare

The UK and the US are both global leaders in healthcare that will play important roles in the adoption of Big Data. We see this global leadership already in oncology (The Cancer Genome Atlas (TCGA), Pan-Cancer Analysis of Whole Genomes (PCAWG)) and in neuropsychiatric disease (PsychENCODE) (Tomczak et al. 2015; Akbarian et al. 2015; Campbell et al. 2020). These Big Data generation and open-access models have resulted in hundreds of applications and scientific publications. The success of these initiatives in convincing the scientific and healthcare communities of the advantages of sharing clinical and molecular data has led to major Big Data generation initiatives in a variety of fields across the world, such as the "All of Us" project in the US (Denny et al. 2019). The UK has now established a clear national strategy that has resulted in the likes of the UK Biobank and 100,000 Genomes projects (Topol 2019b). These projects dovetail with a national strategy for the implementation of genomic medicine, with the opening of multiple genome-sequencing sites and the introduction of genome sequencing as a standard part of care in the NHS (Marx 2015). The US has no such national strategy, and while it has started its own large genomic study—"All of Us"—it has no plans for implementation in its own healthcare system (Topol 2019b). In this review, we have focussed our discussion on developments in Big Data in oncology as a way to understand this complex and fast-moving field, and to develop general guidelines for healthcare at large.

Big Data initiatives in the United Kingdom

The UK Biobank is a prospective cohort initiative that recruited individuals aged between 40 and 69, before disease onset (Allen et al. 2012; Elliott et al. 2018). The project has collected rich data on 500,000 individuals, collating biological samples, physical measures of health, and sociological information such as lifestyle and demographics (Allen et al. 2012). In addition to its size, the UK Biobank offers an unparalleled link to outcomes through integration with the NHS. This unified healthcare system allows researchers to link initial baseline measures with disease outcomes and with multiple sources of medical information, from hospital admissions to clinical visits, positioning them to minimize error in disease classification and diagnosis. The UK Biobank will also conduct routine follow-up assessments to continue providing information on activity, and further expanded biological testing to improve disease and risk factor association.

Beyond the UK Biobank, Public Health England launched the 100,000 Genomes project with the intent to understand the genetic origins of common cancers (Turnbull et al. 2018). This massive effort consists of NHS patients consenting to have their genomes sequenced and linked to their health records. Without the rich phenotypic information collected in the UK Biobank, the project holds limited use as a prospective epidemiological study, but it is a powerful tool for researchers interested in identifying disease-causing single-nucleotide polymorphisms (SNPs). The size of the dataset is itself the main advance, as it provides the statistical power to discover associated SNPs even for rare diseases. Furthermore, an ancillary aim of the 100,000 Genomes Project is to stimulate private-sector growth in the genomics industry within England.

Big Data initiatives in the United States and abroad

In the United States, the "All of Us" project is expanding upon the UK Biobank model by creating a direct link between patients' genomic data and their phenotypes, integrating EHRs, behavioral data, and family history into a unique patient profile (Denny et al. 2019). By creating a standardized and linked database for all participants, "All of Us" will give researchers greater scope than the UK Biobank to understand cancers and discover their associated genetic causes. In addition, "All of Us" succeeds in focusing on minority populations and health, an area of focus that sets it apart and gives it greater clinical significance. The UK should learn from this effort by expanding the UK Biobank to further include minority populations and by integrating it with ancillary patient data, such as from wearables: of the UK Biobank's roughly 500,000 participants, only ~12,000 (i.e., less than 2.5%) identified as non-white (Cohn et al. 2017). Meanwhile, individuals of Asian ethnicities alone made up over 7.5% of the UK population in the 2011 UK Census, with the proportion of minorities projected to rise in the coming years (O'Brien and Potter-Collins 2015; Cohn et al. 2017).

Sweden, too, provides an informative example of the power of investment in rich electronic research registries (Webster 2014). The Swedish government has committed over $70 million in funding per annum to expand a variety of cancer registries that give researchers insight into risk factors for oncogenesis. In addition, its data sources are particularly valuable for scientists because each patient's entries are linked to a unique identity number that can be cross-referenced with over 90 other registries to give a more complete understanding of a patient's health and social circumstances. These registries are not limited to disease states and treatments, but also encompass extensive public administrative records that can give researchers considerable insight into social indicators of health such as income, occupation, and marital status (Connelly et al. 2016). These data sources become even more valuable to Swedish researchers because they have been maintained for decades with commendable consistency—increasing the power of long-term analyses (Connelly et al. 2016). Other nations can learn from the Swedish example by paying particular attention to the use of unique patient identifiers that can map onto a number of datasets collected by government and academia—an idea first mentioned in the US Health Insurance Portability and Accountability Act of 1996 (HIPAA) but not yet implemented (Davis 2019).

China has recently become a leader in the implementation and development of new digital technologies, and it has begun to approach healthcare with an emphasis on data standardization and volume. The central government has already initiated several funding programs aimed at pushing Big Data into healthcare use cases, with a particular eye on linking together administrative data, regional claims data from the national health insurance program, and electronic medical records (Zhang et al. 2018). China hopes to do this by leveraging its existing personal identification system covering all Chinese nationals—similar to the Swedish model of maintaining a variety of regional and national registries linked by personal identification numbers. This is particularly relevant to cancer research, as China has established a new cancer registry (the National Central Cancer Registry of China) that will take advantage of the nation's population size to give unique insight into otherwise rare oncogenesis. The major concerns regarding this initiative are data quality and time. China has only relatively recently adopted the International Classification of Diseases revision 10 (ICD-10) coding system, a standardized method for recording disease states alongside prescribed treatments, and it is still implementing standardized record-keeping terminologies at the regional level. This creates considerable heterogeneity in data quality—as well as poor interoperability between regions—a major obstacle for any national registry effort (Zhang et al. 2018). The recency of these efforts also means that some time will be required before researchers can take advantage of longitudinal analysis, which is vital for oncology research that aims to spot recurrences or track patient survival. In the future, we can expect significant findings to come out of China's efforts to make hundreds of millions of patient files available to researchers, but significant hurdles in standards of care and interoperability must first be overcome.

The wide variety of "Big Data" research projects being undertaken around the world propose different approaches to the future of patient records. The UK is broadly leveraging the centralization of the NHS to link genomic data with clinical care records, and is opening up the disease endpoints to researchers through a patient ID. Sweden and China are also adopting this model, leveraging unique identity numbers issued to citizens to link otherwise disconnected datasets from administrative and healthcare records (Connelly et al. 2016; Cnudde et al. 2016; Zhang et al. 2018). In this way, tests, technologies, and methods are integrated in a way that is specific to the patient but not necessarily to the hospital or clinic. This allows for significant flexibility in the seamless transfer of information between sites, and lets physicians take full advantage of all the data generated. The US "All of Us" program is similar in integrating a variety of patient records into a single patient file stored in the cloud (Denny et al. 2019). However, it does not significantly link to public administrative data sources, and thus is limited in its usefulness for long-term analysis of the effects of social contributors to cancer progression and risk. This foretells greater problems with the current ecosystem of clinical data, where lack of integration, misguided design, and ambiguous data ownership make research and clinical care more difficult rather than easier.

Survey of problems in clinical data use

Fragmentation

Fragmentation is the primary problem that needs to be addressed if EHRs are to be used in any serious clinical capacity. Fragmentation arises when EHRs are unable to communicate with one another, effectively locking patient information into proprietary systems. While there are major players in the US EHR space, such as Epic and General Electric, there are also dozens of minor and niche companies producing their own products—many of which cannot communicate easily or effectively with one another (DeMartino and Larsen 2013). The Clinical Oncology Requirements for the EHR and the National Community Cancer Centers Program have both spoken out about the need for interoperability requirements for EHRs and have published guidelines (Miller 2011). In addition, the Certification Commission for Health Information Technology was created to issue guidelines and standards for EHR interoperability (Miller 2011). Fast Healthcare Interoperability Resources (FHIR) is the current standard for healthcare data exchange published by Health Level 7 (HL7). It builds upon past HL7 standards and a variety of others, such as the Reference Information Model. FHIR defines principles by which data sharing can take place through RESTful APIs, and projects such as Argonaut are working to expand its adoption among EHRs (Chambers et al. 2019). Even with the introduction of the HL7 Ambulatory Oncology EHR Functional Profile, EHRs have not improved and have actually become pain points for clinicians, who struggle to integrate diagnostics from separate labs or hospitals and can even be left in the dark about clinical history if a patient has moved providers (Reisman 2017; Blobel 2018). Even in integrated care providers such as Kaiser Permanente, interoperability issues make EHRs unpopular among clinicians, who struggle to receive outside test results or the narratives of patients who have recently moved (Leonard and Tozzi 2012).
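As a concrete illustration of the kind of exchange FHIR enables, the sketch below retrieves a single Patient resource from a FHIR server over its RESTful interface. The server URL and patient ID are hypothetical placeholders, and the snippet assumes the widely used `requests` library; it is a minimal sketch of a FHIR "read" interaction, not a reference to any particular vendor's implementation.

```python
import requests

# Hypothetical FHIR endpoint; real deployments sit behind authentication.
FHIR_BASE = "https://fhir.example-hospital.org/baseR4"
PATIENT_ID = "12345"  # placeholder identifier

# A FHIR "read" interaction is a plain HTTP GET on /Patient/{id}.
response = requests.get(
    f"{FHIR_BASE}/Patient/{PATIENT_ID}",
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
response.raise_for_status()

patient = response.json()
# Every FHIR resource names its type; demographic fields follow the spec.
print(patient["resourceType"])          # "Patient"
print(patient.get("birthDate"))         # e.g. "1970-01-01"
print(patient.get("name", [{}])[0])     # structured name element
```

Because the exchange is ordinary HTTP plus a standardized JSON resource model, any system that speaks the same profile—lab, imaging center, or EHR—can produce or consume the same record.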

The UK provides an informative contrast in its NHS, a single government-run enterprise that provides healthcare free at the point of service. Currently, the NHS is able to integrate a variety of health records successfully—a step ahead of the US—but relies on outdated technology with security vulnerabilities, such as fax machines (Macaulay 2016). The NHS has recently begun the process of digitizing its health service, with separate NHS Trusts adopting American EHR solutions, such as the Cambridgeshire NHS Trust's recent agreement with Epic (Honeyman et al. 2016). However, the NHS still lags behind the US in broad use and uptake across all of its services (Wallace 2016). Furthermore, it will need to require the variety of EHRs being adopted to conform to centralized standards and interoperability requirements, so that services as far afield as genome sequencing can be added to a patient record.

Misguided EHR design

Another issue frequently identified with the modern incarnation of EHRs is that they are often not helpful to doctors in diagnosis, and leading clinicians have identified them as a hindrance to patient care (Lenzer 2017; Gawande 2018). A common denominator among the current generation of EHRs is their focus on billing codes: the set of numbers assigned to every task, service, and drug dispensed by a healthcare professional, used to determine the level of reimbursement the provider will receive. This focus on billing codes is a necessity of the insurance system in the US, which reimburses providers on a service-rendered basis (Essin 2012; Lenzer 2017). Because every part of the care process must be billed to insurers (of which there are many), and sometimes to multiple insurers simultaneously, EHRs in the US are designed foremost with insurance needs in mind. As a result, EHRs are shaped by government regulations around billing codes and the requirements of insurance companies, and only then are able to consider the needs of providers or researchers (Bang and Baik 2019). And because purchasing decisions for EHRs are not made by physicians, the priority given to patient care outcomes falls behind other needs. The American Medical Association has cited the difficulty of EHRs as a contributing factor in physician burnout and as a waste of valuable time (Lenzer 2017; Gardner et al. 2019). The NHS, due to its reliance on American manufacturers of EHRs, suffers the same problems despite its fundamentally different structure.

Related to the problem of EHRs being optimized for billing rather than patient care is their lack of development beyond repositories of patient information into diagnostic aids. A study of modern-day EHR use in the clinic notes many pain points for physicians and healthcare teams (Assis-Hassid et al. 2019). Foremost was the variance in EHR use within the clinic—in part because these programs are often not designed with provider workflows in mind (Assis-Hassid et al. 2019). In addition, EHRs were found to distract from interpersonal communication and did not integrate the many different types of data being created by nurses, physician assistants, laboratories, and other providers into usable information for physicians (Assis-Hassid et al. 2019).

Data ownership

One of the major challenges of current implementations of Big Data is the lack of regulations, incentives, and systems to manage ownership of, and responsibility for, data. In the US clinical space, this takes the form of compliance with HIPAA, a now decades-old law that aimed to set rules for patient privacy and control of data (Adibuzzaman et al. 2018). As more types of data are generated for patients and uploaded to electronic platforms, HIPAA becomes a major roadblock to data sharing, as it creates significant privacy concerns that hamper research. Today, a researcher searching for even simple demographic and disease attributes can rapidly re-identify an otherwise de-identified patient (Adibuzzaman et al. 2018). Concerns about breaching HIPAA prevent complete and open data-sharing agreements—blocking the path to the specificity needed for the next generation of research—and also complicate the clinical application of these technologies, as data sharing becomes bogged down by ambiguity surrounding old regulations on patient privacy. Furthermore, compliance with the General Data Protection Regulation (GDPR) in the EU has hampered international collaborations, as compliance with both HIPAA and GDPR is not yet standardized (Rabesandratana 2019).

Data sharing is further complicated by the need to develop new technologies to integrate across a variety of providers. As the example of the Informatics for Integrating Biology and the Bedside (i2b2) program, funded by the NIH with Partners Healthcare, shows, it is difficult and enormously expensive to overlay programs on top of existing EHRs (Adibuzzaman et al. 2018). Rather, a new approach needs to be developed to solve the problem of data sharing. Blockchain provides an innovative approach and has recently been explored in the literature as a solution that centers patient control of data and promotes safe and secure data sharing through data-transfer transactions secured by encryption (Gordon and Catalini 2018). Companies exploring this mechanism for data sharing include Nebula Genomics, a firm founded by George Church, which aims to secure genomic data on a blockchain in a way that scales commercially and can be used for research only with permission from the data owners—the patients themselves. Other firms, such as Doc.Ai, are exploring the use of a variety of data types stored on a blockchain to create predictive models of disease, but all are centrally based on the idea of a blockchain securing patient data and ensuring private, accurate transfer between sites (Agbo et al. 2019). The advantages of blockchain for healthcare data transfer and storage lie in its security and privacy, but the approach has yet to gain widespread use.
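To make the underlying idea concrete, the toy sketch below hash-chains a sequence of data-access "transactions" so that any later tampering with an earlier record is detectable. It is a deliberately minimal illustration of the append-only, tamper-evident property that blockchain-based health data platforms rely on; it omits consensus, encryption, and access control entirely, and does not reflect how any named company actually implements its system. The patient and accessor identifiers are invented.

```python
import hashlib
import json
import time

def make_block(record: dict, prev_hash: str) -> dict:
    """Append-only block: the hash covers the record and the previous hash."""
    body = {"record": record, "prev_hash": prev_hash, "timestamp": time.time()}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list) -> bool:
    """Recompute every hash; any edit to an earlier block breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("record", "prev_hash", "timestamp")}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical data-sharing log: who accessed which (pseudonymous) record.
chain, prev = [], "0" * 64
for tx in [{"patient": "P-001", "accessed_by": "research-group-A"},
           {"patient": "P-001", "accessed_by": "oncology-clinic-B"}]:
    block = make_block(tx, prev)
    chain.append(block)
    prev = block["hash"]

print(verify(chain))                                   # True
chain[0]["record"]["accessed_by"] = "someone-else"
print(verify(chain))                                   # False: tampering is detectable
```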

Recommendations for clinical application

Design a new generation of EHRs

It is conceivable that physicians in the near future will be faced with terabytes of data—patients coming to their clinics with years of continuous monitoring of heart rate, blood sugar, and a variety of other factors (Topol 2019a). Gaining clinical insight from such a large quantity of data is an impossible expectation to place upon physicians. To cope with the exploding number of tests, assays, and results, EHRs will need to be extended from simply being records of patient–physician interactions and digital folders to being diagnostic aids (Fig. 1). Companies such as Roche–Flatiron are already moving towards this model by building predictive and analytical tools into the EHRs they supply to providers. However, broader adoption across a variety of providers, and the transparency and portability of the models generated, will also be vital. AI-based clinical decision support will need to be auditable in order to avoid racial bias and other potential pitfalls (Char et al. 2018). Patients will soon request permanent access to the models and predictions generated by ML, so as to gain greater clarity into how clinical decisions were made and to guard against malpractice.

Fig. 1: In this example we demonstrate how many possible factors may come together to better target patients for early screening measures, which can lower aggregate costs for the healthcare system.

Designing this next generation of EHRs will require collaboration between physicians, patients, providers, and insurers in order to ensure ease of use and efficacy. In terms of specific recommendations for the NHS, the Veterans Administration provides a fruitful approach, as it was able to develop its own EHR that compares extremely favorably with the privately produced Epic EHR (Garber et al. 2014). Its solution was open-access and public-domain, and it won the loyalty of physicians by improving patient care (Garber et al. 2014). However, the VA's solution was not widely adopted, due to a lack of support for continuous maintenance and limited support for billing (Garber et al. 2014). While the NHS does not need to consider the insurance industry's input, it should note that private EHRs gained market prominence in part because they provided hands-on support for providers and were far more responsive to the specific concerns providers raised (Garber et al. 2014). Evidence from Denmark suggests that EHR implementation in the UK would benefit from private competitors implementing solutions at the regional rather than the national level, in order to balance the need for competition and standardization (Kierkegaard 2013).

Develop new EHR workflows

Already, researchers and enterprise are developing predictive models that can better diagnose cancers based on imaging data (Bibault et al. 2016). While these products and tools are not yet market-ready and are far from clinical approval, they portend things to come. We envision a future in which the job of an oncologist becomes increasingly interpretive rather than diagnostic. But to get to that future, we will need to train our algorithms much as we train our future doctors—with millions of examples. To build this corpus of data, we will need to create a digital infrastructure around Big Data that can balance the demands of researchers and enterprise, as they continuously improve their models, with those of patients and physicians, who must continue their important work using existing tools and knowledge. In Fig. 2, we demonstrate a hypothetical workflow based on models provided by other researchers in the field (Bibault et al. 2016; Topol 2019a). This simplified workflow positions the EHR as an integrative tool that captures a large variety of data sources and transforms them into a standardized format for storage in a secure cloud facility (Osong et al. 2019). Current limitations in HIPAA in the US have hindered innovation in this field, so reform will need to guarantee both the protection of private patient data and open access to patient histories for the next generation of diagnostic tools. The introduction of accurate predictive models for patient treatment will mean that cancer diagnosis fundamentally changes. The job of oncologists will transform as they balance the recommendations of digital tools—able to instantly integrate the literature and electronic records from past patients—with their own best clinical judgment.

Fig. 2: Here, various heterogeneous data types are fed into a centralized EHR system and uploaded to a secure digital cloud, where the data can be de-identified and used by research and enterprise, but primarily by physicians and patients.
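A minimal sketch of the first step in such a workflow is shown below: records arriving in different shapes from hypothetical sources (a wearable feed, a laboratory system, a clinic EHR export) are normalized into one patient-centric schema before upload to secure storage. The field names, sources, and schema are invented for illustration; a production pipeline would map onto agreed standards such as FHIR rather than an ad hoc format.

```python
from datetime import datetime, timezone

# Invented example payloads from three heterogeneous sources.
wearable = {"pid": "P-001", "hr_bpm": 71, "ts": "2020-03-01T08:00:00Z"}
lab = {"patient_id": "P-001", "test": "HbA1c", "value": 6.1, "unit": "%"}
clinic = {"PatientID": "P-001", "note": "Routine follow-up, no new symptoms."}

def to_standard(source: str, payload: dict, patient_key: str) -> dict:
    """Map a source-specific payload onto one shared, minimal schema."""
    return {
        "patient_id": payload[patient_key],
        "source": source,
        "recorded_at": payload.get("ts", datetime.now(timezone.utc).isoformat()),
        "content": {k: v for k, v in payload.items() if k != patient_key},
    }

standardized = [
    to_standard("wearable", wearable, "pid"),
    to_standard("laboratory", lab, "patient_id"),
    to_standard("clinic_ehr", clinic, "PatientID"),
]

# All entries now share one schema and can be stored or queried uniformly.
for entry in standardized:
    print(entry["patient_id"], entry["source"], entry["content"])
```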

Use a global patient ID

While we are already seeing the fruits of decades of research into ML methods, a whole new set of techniques will soon leave research labs and be applied in the clinic. This set of "omics"—a term spanning proteomics, genomics, metabolomics, and others—will reveal even more specificity about a patient's cancer at lower cost (Cho 2015). However, these techniques, like other technologies, will create petabytes of data that will need to be stored and integrated to help physicians.

As tests and healthcare providers diversify, EHRs will need to address the questions of extensibility and flexibility. Providers as disparate as counseling offices and MRI imaging centers cannot be expected to use the same software—or even similar software. As specific solutions for diverse providers are created, they will need to interface with existing EHRs in a standard format. The UK Biobank creates a model for these types of interactions in its use of a single patient ID to link a variety of data types, allowing for extensibility as future iterations and improvements add data sources to the project. Sweden and China are also informative examples in their usage of national citizen identification numbers as a method of linking clinical and administrative datasets together (Cnudde et al. 2016; Zhang et al. 2018). Single patient identification numbers do not yet exist in the US, despite their inclusion in HIPAA, because subsequent Congressional action prevented their creation (Davis 2019). Private providers have stepped in to bridge the gap, but they have also called on the US government to create an official patient ID system (Davis 2019). Not only would a single patient ID allow researchers to link US administrative data with clinical outcomes, it would also provide a solution to the questions of data ownership and fragmentation that plague the current system.
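The sketch below shows, with invented data, how a shared identifier makes such linkage trivial: a clinical outcomes table and an administrative registry that use the same patient ID can be joined in a single step with pandas. Dataset contents and column names are hypothetical.

```python
import pandas as pd

# Hypothetical clinical dataset keyed by a unique patient ID.
clinical = pd.DataFrame({
    "patient_id": ["P-001", "P-002", "P-003"],
    "diagnosis": ["breast cancer", "colorectal cancer", "breast cancer"],
    "stage_at_diagnosis": ["II", "III", "I"],
})

# Hypothetical administrative registry using the same identifier.
administrative = pd.DataFrame({
    "patient_id": ["P-001", "P-002", "P-003"],
    "occupation": ["teacher", "driver", "engineer"],
    "region": ["North", "South", "North"],
})

# Because both tables share one identifier, linkage is a single join.
linked = clinical.merge(administrative, on="patient_id", how="inner")
print(linked)
```

Without a shared identifier, the same linkage requires error-prone probabilistic matching on names, birth dates, and addresses, which is exactly the fragility the Swedish and Chinese registry models avoid.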

The future of healthcare will build on the Big Data projects currently being pioneered around the world. The models of data integration pioneered by the "All of Us" program and the analytics championed by P4 medicine will come to define the patient experience (Flores et al. 2013). However, in this piece we have demonstrated a series of hurdles that the field must overcome to avoid imposing additional burdens on physicians and to deliver significant value. We propose a set of recommendations, built upon an examination of the NHS and other publicly administered healthcare models as well as the US multi-payer system, to bridge the gap between the market competition needed to develop these new technologies and effective patient care.

Access to patient data must be a paramount guiding principle as regulators begin to approach the problem of wrangling the many streams of data already being generated. Data must be accessible to physicians and patients, but must also be secured and de-identified for the benefit of research. The pathway taken by the UK Biobank to guarantee data integration and universal access has been the creation of a single database and a single protocol for accessing its contents (Allen et al. 2012). It is feasible to suggest a similar system for the NHS, which is already centralized with a single funding source. However, such a system will necessarily also be a security concern due to its centralized nature, even if patient data are encrypted (Fig. 3). Another approach is to follow in the footsteps of the US's HIPAA, which suggested the creation of unique patient IDs over 20 years ago. With a single patient identifier, EHRs would be able to communicate with heterogeneous systems designed specifically for labs, imaging centers, counseling services, and more (Fig. 4). However, this design presupposes a standardized format and protocol for communication across a variety of databases—similar to the HL7 standards that already exist (Bender and Sartipi 2013). In place of a centralized authority building out a digital infrastructure to house and communicate patient data, mandating protocols and security standards would allow the development of specialized EHR solutions for an ever-diversifying set of healthcare providers and encourage the market needed for continual development and support of these systems. Avoiding the data fragmentation already seen in the US then becomes an exercise in mandating data sharing in law.

Fig. 3: Future implementations of Big Data will need to not only integrate data, but also encrypt and de-identify it for secure storage.
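A minimal sketch of the two operations named in Fig. 3 is given below: direct identifiers are replaced by a keyed pseudonym, and the remaining record is encrypted before storage. It uses Python's standard `hmac`/`hashlib` modules and the third-party `cryptography` package, with invented field names; real de-identification must also address quasi-identifiers (dates, postcodes, rare diagnoses), which this toy example does not.

```python
import hmac
import hashlib
import json
from cryptography.fernet import Fernet  # pip install cryptography

SECRET_PSEUDONYM_KEY = b"replace-with-a-managed-secret"
STORAGE_KEY = Fernet.generate_key()       # in practice, held in a key vault
fernet = Fernet(STORAGE_KEY)

record = {  # invented example record
    "nhs_number": "943 476 5919",
    "name": "Jane Doe",
    "diagnosis": "C50.9",
    "systolic_bp": 128,
}

# 1. De-identify: drop direct identifiers, keep a keyed pseudonym so the
#    same patient can still be linked across datasets by authorized parties.
pseudonym = hmac.new(SECRET_PSEUDONYM_KEY,
                     record["nhs_number"].encode(),
                     hashlib.sha256).hexdigest()
deidentified = {"pseudonym": pseudonym,
                "diagnosis": record["diagnosis"],
                "systolic_bp": record["systolic_bp"]}

# 2. Encrypt the de-identified record before it is written to shared storage.
ciphertext = fernet.encrypt(json.dumps(deidentified).encode())

# Only holders of STORAGE_KEY can recover the record.
restored = json.loads(fernet.decrypt(ciphertext))
print(restored["pseudonym"][:12], restored["diagnosis"])
```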

Fig. 4: Hypothetical healthcare system design based on unique patient identifiers that function across a variety of systems and providers—linking together disparate datasets into a complete patient profile.

The next problem then becomes the inevitable application of AI to healthcare. Any such tool will have to stand up to scrutiny: it will be asked not only to outclass human diagnosis, but also to reveal its methods. Because of the opacity of ML models, the "black box" effect means that diagnoses cannot be scrutinized or understood by outside observers (Fig. 5). This makes clinical use extremely limited unless further techniques are developed to deconvolute the decision-making process of these models. Until then, we expect that AI models will only provide support for diagnoses.

Fig. 5: Without transparency in many of the models being implemented as to why and how decisions are being made, there exists room for algorithmic bias and no room for improvement or criticism by physicians. The "black box" of machine learning obscures why decisions are made and what actually affects predictions.

Furthermore, AI models often simply replicate biases in existing datasets. Cohn et al. (2017) demonstrated clear deficiencies in the representation of minorities among UK Biobank participants. Any research conducted on these datasets will necessarily only create models that generalize to the population within them (a largely homogeneous white-British group) (Fig. 6). To protect against algorithmic bias, and against the black box of current models hiding their decision-making, regulators must enforce rules that expose the decision-making of future predictive healthcare models to public and physician scrutiny. Similar to the existing FDA regulatory framework for medical devices, algorithms too must be subjected to regulatory scrutiny to prevent discrimination while ensuring transparency of care.

Fig. 6: The "All of Us" study will meet this need by specifically aiming to recruit a diverse pool of participants to develop disease models that generalize to every citizen, not just the majority (Denny et al. 2019). Future global Big Data generation projects should learn from this example in order to guarantee equality of care for all patients.
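One simple, auditable check that a regulator or physician could demand is a breakdown of a model's performance by demographic subgroup, as sketched below with scikit-learn on invented labels and predictions. A real audit would go further (calibration, error costs, intersectional groups), but even this minimal report makes disparities visible rather than hidden inside the model.

```python
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

# Invented evaluation data: true outcomes, model predictions, self-reported group.
eval_df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 1, 0, 0, 1, 0, 0, 1, 0],
    "group":  ["white", "white", "white", "white", "white",
               "non-white", "non-white", "non-white", "non-white", "non-white"],
})

# Report accuracy and sensitivity (recall) separately for each subgroup, so
# that a model performing well only on the majority group is exposed.
for group, subset in eval_df.groupby("group"):
    acc = accuracy_score(subset["y_true"], subset["y_pred"])
    sens = recall_score(subset["y_true"], subset["y_pred"])
    print(f"{group:>10}: accuracy={acc:.2f}, sensitivity={sens:.2f}")
```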

The future of healthcare will increasingly live on server racks and be built in glass office buildings by teams of programmers. The US must take seriously the benefits of the centralized regulations and protocols that have allowed the NHS to be enormously successful in preventing data fragmentation. The NHS, in turn, must approach freer markets for healthcare devices and technologies as a necessary condition for entering the next generation of healthcare delivery, which will require constant reinvention and improvement to deliver accurate care.

Overall, we are entering a transition in how we think about caring for patients and the role of the physician. Rather than maintaining a reactive healthcare system that finds cancers only once they have advanced to a serious stage, Big Data offers us the opportunity to fine-tune screening and prevention protocols to significantly reduce the burden of diseases such as advanced-stage cancers and metastases. This development allows physicians to think about each patient individually when designing a treatment plan, as they leverage information that goes beyond rough demographic indicators, such as the genomic sequence of a patient's tumor. Healthcare is not yet prepared for this shift, so it is the job of governments around the world to pay attention to how others have implemented Big Data in healthcare as they write the regulatory structure of the future. Ensuring competition, data security, and algorithmic transparency will be the hallmarks of how we think about guaranteeing better patient care.

References

Adibuzzaman M, DeLaurentis P, Hill J, Benneyworth BD (2018) Big data in healthcare—the promises, challenges and opportunities from a research perspective: a case study with a model database. AMIA Annu Symp Proc 2017:384–392


Agbo CC, Mahmoud QH, Eklund JM (2019) Blockchain technology in healthcare: a systematic review. Healthcare 7:56


Aguet F, Brown AA, Castel SE, Davis JR, He Y, Jo B et al. (2017) Genetic effects on gene expression across human tissues. Nature 550:204–213


Akbarian S, Liu C, Knowles JA, Vaccarino FM, Farnham PJ, Crawford GE et al. (2015) The PsychENCODE project. Nat Neurosci 18:1707–1712


Allen N, Sudlow C, Downey P, Peakman T, Danesh J, Elliott P et al. (2012) UK Biobank: current status and what it means for epidemiology. Health Policy Technol 1:123–126

Assis-Hassid S, Grosz BJ, Zimlichman E, Rozenblum R, Bates DW (2019) Assessing EHR use during hospital morning rounds: a multi-faceted study. PLoS ONE 14:e0212816

Bang CS, Baik GH (2019) Using big data to see the forest and the trees: endoscopic submucosal dissection of early gastric cancer in Korea. Korean J Intern Med 34:772–774


Bender D, Sartipi K (2013) HL7 FHIR: an agile and RESTful approach to healthcare information exchange. In Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems, IEEE. pp 326–331

Bibault J-E, Giraud P, Burgun A (2016) Big Data and machine learning in radiation oncology: state of the art and future prospects. Cancer Lett 382:110–117


Blobel B (2018) Interoperable EHR systems—challenges, standards and solutions. Eur J Biomed Inf 14:10–19


Camacho DM, Collins KM, Powers RK, Costello JC, Collins JJ (2018) Next-generation machine learning for biological networks. Cell 173:1581–1592

Campbell PJ, Getz G, Stuart JM, Korbel JO, Stein LD (2020) Pan-cancer analysis of whole genomes. Nature https://www.nature.com/articles/s41586-020-1969-6

Chambers DA, Amir E, Saleh RR, Rodin D, Keating NL, Osterman TJ, Chen JL (2019) The impact of Big Data research on practice, policy, and cancer care. Am Soc Clin Oncol Educ Book Am Soc Clin Oncol Annu Meet 39:e167–e175

Char DS, Shah NH, Magnus D (2018) Implementing machine learning in health care—addressing ethical challenges. N Engl J Med 378:981–983

Cho WC (2015) Big Data for cancer research. Clin Med Insights Oncol 9:135–136

Cnudde P, Rolfson O, Nemes S, Kärrholm J, Rehnberg C, Rogmark C, Timperley J, Garellick G (2016) Linking Swedish health data registers to establish a research database and a shared decision-making tool in hip replacement. BMC Musculoskelet Disord 17:414

Cohn EG, Hamilton N, Larson EL, Williams JK (2017) Self-reported race and ethnicity of US biobank participants compared to the US Census. J Community Genet 8:229–238

Connelly R, Playford CJ, Gayle V, Dibben C (2016) The role of administrative data in the big data revolution in social science research. Soc Sci Res 59:1–12


Davis J (2019) National patient identifier HIPAA provision removed in proposed bill. HealthITSecurity https://healthitsecurity.com/news/national-patient-identifier-hipaa-provision-removed-in-proposed-bill

DeMartino JK, Larsen JK (2013) Data needs in oncology: “Making Sense of The Big Data Soup”. J Natl Compr Canc Netw 11:S1–S12

Deng J, El Naqa I, Xing L (2018) Editorial: machine learning with radiation oncology big data. Front Oncol 8:416

Denny JC, Rutter JL, Goldstein DB, Philippakis Anthony, Smoller JW, Jenkins G et al. (2019) The “All of Us” research program. N Engl J Med 381:668–676

Elliott LT, Sharp K, Alfaro-Almagro F, Shi S, Miller KL, Douaud G et al. (2018) Genome-wide association studies of brain imaging phenotypes in UK Biobank. Nature 562:210–216

Essin D (2012) Improve EHR systems by rethinking medical billing. Physicians Pract. https://www.physicianspractice.com/ehr/improve-ehr-systems-rethinking-medical-billing

Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K et al. (2019) A guide to deep learning in healthcare. Nat Med 25:24–29

Fessele KL (2018) The rise of Big Data in oncology. Semin Oncol Nurs 34:168–176

Flores M, Glusman G, Brogaard K, Price ND, Hood L (2013) P4 medicine: how systems medicine will transform the healthcare sector and society. Pers Med 10:565–576


Garber S, Gates SM, Keeler EB, Vaiana ME, Mulcahy AW, Lau C et al. (2014) Redirecting innovation in U.S. Health Care: options to decrease spending and increase value: Case Studies 133

Gardner RL, Cooper E, Haskell J, Harris DA, Poplau S, Kroth PJ et al. (2019) Physician stress and burnout: the impact of health information technology. J Am Med Inf Assoc 26:106–114

Gawande A (2018) Why doctors hate their computers. The New Yorker , 12 https://www.newyorker.com/magazine/2018/11/12/why-doctors-hate-their-computers

Gordon WJ, Catalini C (2018) Blockchain technology for healthcare: facilitating the transition to patient-driven interoperability. Comput Struct Biotechnol J 16:224–230

Hasin Y, Seldin M, Lusis A (2017) Multi-omics approaches to disease. Genome Biol 18:83

Honeyman M, Dunn P, McKenna H (2016) A Digital NHS. An introduction to the digital agenda and plans for implementation https://www.kingsfund.org.uk/sites/default/files/field/field_publication_file/A_digital_NHS_Kings_Fund_Sep_2016.pdf

Kierkegaard P (2013) eHealth in Denmark: A Case Study. J Med Syst 37

Krumholz HM (2014) Big Data and new knowledge in medicine: the thinking, training, and tools needed for a learning health system. Health Aff 33:1163–1170

Lenzer J (2017) Commentary: the real problem is that electronic health records focus too much on billing. BMJ 356:j326

Leonard D, Tozzi J (2012) Why don’t more hospitals use electronic health records. Bloom Bus Week

Macaulay T (2016) Progress towards a paperless NHS. BMJ 355:i4448

Madhavan S, Subramaniam S, Brown TD, Chen JL (2018) Art and challenges of precision medicine: interpreting and integrating genomic data into clinical practice. Am Soc Clin Oncol Educ Book Am Soc Clin Oncol Annu Meet 38:546–553

Marx V (2015) The DNA of a nation. Nature 524:503–505

Miller RS (2011) Electronic health record certification in oncology: role of the certification commission for health information technology. J Oncol Pr 7:209–213

Norgeot B, Glicksberg BS, Butte AJ (2019) A call for deep-learning healthcare. Nat Med 25:14–15

O’Brien R, Potter-Collins A (2015) 2011 Census analysis: ethnicity and religion of the non-UK born population in England and Wales: 2011. Office for National Statistics. https://www.ons.gov.uk/peoplepopulationandcommunity/culturalidentity/ethnicity/articles/2011censusanalysisethnicityandreligionofthenonukbornpopulationinenglandandwales/2015-06-18

Osong AB, Dekker A, van Soest J (2019) Big data for better cancer care. Br J Hosp Med 80:304–305

Rabesandratana T (2019) European data law is impeding studies on diabetes and Alzheimer’s, researchers warn. Sci AAAS. https://doi.org/10.1126/science.aba2926

Raghupathi W, Raghupathi V (2014) Big data analytics in healthcare: promise and potential. Health Inf Sci Syst 2:3

Reisman M (2017) EHRs: the challenge of making electronic data usable and interoperable. Pharm Ther 42:572–575

Shendure J, Ji H (2008) Next-generation DNA sequencing. Nat Biotechnol 26:1135–1145

Stephens ZD, Lee SY, Faghri F, Campbell RH, Zhai C, Efron MJ et al. (2015) Big Data: astronomical or genomical? PLOS Biol 13:e1002195

Tomczak K, Czerwińska P, Wiznerowicz M (2015) The Cancer Genome Atlas (TCGA): an immeasurable source of knowledge. Contemp Oncol 19:A68–A77

Topol E (2019a) High-performance medicine: the convergence of human and artificial intelligence. Nat Med 25:44

Topol E (2019b) The topol review: preparing the healthcare workforce to deliver the digital future. Health Education England https://topol.hee.nhs.uk/

Turnbull C, Scott RH, Thomas E, Jones L, Murugaesu N, Pretty FB, Halai D, Baple E, Craig C, Hamblin A, et al. (2018) The 100 000 Genomes Project: bringing whole genome sequencing to the NHS. BMJ 361

Wallace WA (2016) Why the US has overtaken the NHS with its EMR. National Health Executive Magazine, pp 32–34 http://www.nationalhealthexecutive.com/Comment/why-the-us-has-overtaken-the-nhs-with-its-emr

Webster PC (2014) Sweden’s health data goldmine. CMAJ Can Med Assoc J 186:E310

Wetterstrand KA (2019) DNA sequencing costs: data from the NHGRI Genome Sequencing Program (GSP). Natl Hum Genome Res Inst. www.genome.gov/sequencingcostsdata , Accessed 2019

Zhang L, Wang H, Li Q, Zhao M-H, Zhan Q-M (2018) Big data and medical research in China. BMJ 360:j5910


Author information

Authors and Affiliations

Department of Genetics, University of Cambridge, Downing Site, Cambridge, CB2 3EH, UK

Raag Agrawal & Sudhakaran Prabakaran

Department of Biology, Columbia University, 116th and Broadway, New York, NY, 10027, USA

Raag Agrawal

Department of Biology, Indian Institute of Science Education and Research, Pune, Maharashtra, 411008, India

Sudhakaran Prabakaran

St Edmund’s College, University of Cambridge, Cambridge, CB3 0BN, UK


Corresponding author

Correspondence to Sudhakaran Prabakaran .

Ethics declarations

Conflict of interest

SP is co-founder of Nonexomics.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Associate editor: Frank Hailer

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Agrawal, R., Prabakaran, S. Big data in digital healthcare: lessons learnt and recommendations for general practice. Heredity 124 , 525–534 (2020). https://doi.org/10.1038/s41437-020-0303-2


Received : 28 June 2019

Revised : 25 February 2020

Accepted : 25 February 2020

Published : 05 March 2020

Issue Date : April 2020

DOI : https://doi.org/10.1038/s41437-020-0303-2





Ethics review of big data research: What should stay and what should be reformed?

Agata Ferretti

1 Health Ethics and Policy Lab, Department of Health Sciences and Technology, ETH Zürich, Hottingerstrasse 10 (HOA), 8092 Zürich, Switzerland

Marcello Ienca

Mark Sheehan

2 The Ethox Centre, Department of Population Health, University of Oxford, Oxford, UK

Alessandro Blasimme

Edward S. Dove

3 School of Law, University of Edinburgh, Edinburgh, UK

Bobbie Farsides

4 Brighton and Sussex Medical School, Brighton, UK

Phoebe Friesen

5 Biomedical Ethics Unit, Department of Social Studies of Medicine, McGill University, Montreal, Canada

6 Johns Hopkins Berman Institute of Bioethics, Baltimore, USA

Walter Karlen

7 Mobile Health Systems Lab, Department of Health Sciences and Technology, ETH Zürich, Zürich, Switzerland

Peter Kleist

8 Cantonal Ethics Committee Zürich, Zürich, Switzerland

S. Matthew Liao

9 Center for Bioethics, Department of Philosophy, New York University, New York, USA

Camille Nebeker

10 Research Center for Optimal Digital Ethics in Health (ReCODE Health), Herbert Wertheim School of Public Health and Longevity Science, University of California, San Diego, USA

Gabrielle Samuel

11 Department of Global Health and Social Medicine, King’s College London, London, UK

Mahsa Shabani

12 Faculty of Law and Criminology, Ghent University, Ghent, Belgium

Minerva Rivas Velarde

13 Department of Radiology and Medical Informatics, Faculty of Medicine, University of Geneva, Geneva, Switzerland

Effy Vayena

Associated data

Not applicable.

Ethics review is the process of assessing the ethics of research involving humans. The Ethics Review Committee (ERC) is the key oversight mechanism designated to ensure ethics review. Whether or not this governance mechanism is still fit for purpose in the data-driven research context remains a debated issue among research ethics experts.

In this article, we seek to address this issue in a twofold manner. First, we review the strengths and weaknesses of ERCs in ensuring ethical oversight. Second, we map these strengths and weaknesses onto specific challenges raised by big data research. We distinguish two categories of potential weakness. The first category concerns persistent weaknesses, i.e., those which are not specific to big data research, but may be exacerbated by it. The second category concerns novel weaknesses, i.e., those which are created by and inherent to big data projects. Within this second category, we further distinguish between purview weaknesses related to the ERC’s scope (e.g., how big data projects may evade ERC review) and functional weaknesses, related to the ERC’s way of operating. Based on this analysis, we propose reforms aimed at improving the oversight capacity of ERCs in the era of big data science.

Conclusions

We believe the oversight mechanism could benefit from these reforms because they will help to overcome data-intensive research challenges and consequently benefit research at large.

Background

The debate about the adequacy of the Ethics Review Committee (ERC) as the chief oversight body for big data studies is partly rooted in the historical evolution of the ERC. Particularly relevant is the ERC's changing response to new methods and technologies in scientific research. ERCs—also known as Institutional Review Boards (IRBs) or Research Ethics Committees (RECs)—came into existence in the 1950s and 1960s [1]. Their original mission was to protect the interests of human research participants, particularly through an assessment of potential harms to them (e.g., physical pain or psychological distress) and benefits that might accrue from the proposed research. ERCs expanded in scope during the 1970s, from participant protection towards ensuring valuable and ethical human subject research (e.g., having researchers implement an informed consent process), as well as supporting researchers in exploring their queries [2].

Fast forward fifty years, and a lot has changed. Today, biomedical projects leverage unconventional data sources (e.g., social media), partially inscrutable data analytics tools (e.g., machine learning), and unprecedented volumes of data [3–5]. Moreover, the evolution of research practices and new methodologies such as post-hoc data mining have blurred the concept of the 'human subject' and elicited a shift towards the concept of the 'data subject', as attested in data protection regulations [6, 7]. With data protection and privacy concerns in the spotlight of big data research review, language from data protection laws has worked its way into the vocabulary of research ethics. This terminological shift further reveals that big data, together with the modern analytic methods used to interpret the data, creates novel dynamics between researchers and participants [8]. Research data repositories about individuals, and about aggregates of individuals, are expanding considerably in size. Researchers can remotely access and use large volumes of potentially sensitive data without communicating or actively engaging with study participants. Consequently, participants become more vulnerable and more passively subjected to the research itself [9]. As such, the nature of risk involved in this new form of research changes too: it moves from the risk of physical or psychological harm towards the risk of informational harm, such as privacy breaches or algorithmic discrimination [10]. This is the case, for instance, with projects using data collected through web search engines, mobile and smart devices, entertainment websites, and social media platforms. The fact that health-related research is leaving hospital labs and spreading into online space creates novel opportunities for research, but also raises novel challenges for ERCs. For this reason, it is important to re-examine the fit between new data-driven forms of research and existing oversight mechanisms [11].

The suitability of ERCs in the context of big data research is not merely a theoretical puzzle but also a practical concern resulting from recent developments in data science. In 2014, for example, the so-called ‘emotional contagion study’ received severe criticism for avoiding ethical oversight by an ERC, failing to obtain research consent, violating privacy, inflicting emotional harm, discriminating against data subjects, and placing vulnerable participants (e.g., children and adolescents) at risk [ 12 , 13 ]. In both public and expert opinion [ 14 ], a responsible ERC would have rejected this study because it contravened the research ethics principles of preventing harm (in this case, emotional distress) and adequately informing data subjects. However, the protocol adopted by the researchers was not required to undergo ethics review under US law [ 15 ] for two reasons. First, the data analyzed were considered non-identifiable, and researchers did not engage directly with subjects, exempting the study from ethics review. Second, the study team included both scientists affiliated with a public university (Cornell) and Facebook employees. The affiliation of the researchers is relevant because—in the US and some other countries—privately funded studies are not subject to the same research protections and ethical regulations as publicly funded research [ 16 ]. An additional example is the 2015 case in which the United Kingdom (UK) National Health Service (NHS) shared 1.6 million pieces of identifiable and sensitive data with Google DeepMind. This data transfer from the public to the private party took place legally, without the need for patient consent or ethics review oversight [ 17 ]. These cases demonstrate how researchers can pursue potentially risky big data studies without falling under the ERC’s purview. The limitations of the regulatory framework for research oversight are evident, in both private and public contexts.

The gaps in the ERC’s regulatory process, together with the increased sophistication of research contexts—which now include a variety of actors such as universities, corporations, funding agencies, public institutes, and citizens’ associations—have led to an increase in the range of oversight bodies. For instance, besides traditional university ethics committees and national oversight committees, funding agencies and national research initiatives have increasingly created internal ethics review boards [ 18 , 19 ]. New participatory models of governance have emerged, largely due to an increase in subjects’ requests to control their own data [ 20 ]. Corporations are creating research ethics committees as well, modelled after the institutional ERC [ 21 ]. In May 2020, for example, Facebook welcomed the first members of its Oversight Board, whose aim is to review the company’s decisions about content moderation [ 22 ]. Whether this increase in oversight models is motivated by the urge to fill the existing regulatory gaps, or whether it is just ‘ethics washing’, is still an open question. However, other types of specialized committees have already found their place alongside ERCs, when research involves international collaboration and data sharing [ 23 ]. Among others, data safety monitoring boards, data access committees, and responsible research and innovation panels serve the purpose of covering research areas left largely unregulated by current oversight [ 24 ].

The data-driven digital transformation challenges the purview and efficacy of ERCs. It also raises fundamental questions concerning the role and scope of ERCs as the oversight body for ethical and methodological soundness in scientific research. 1 Among these questions, this article will explore whether ERCs are still capable of fulfilling their intended purpose, given the range of novel (maybe not categorically new, but at least different in practice) issues that have emerged in this type of research. To answer this question, we explore some of the challenges that the ERC oversight approach faces in the context of big data research and review the main strengths and weaknesses of this oversight mechanism. Based on this analysis, we will outline possible solutions to address current weaknesses and improve ethics review in the era of big data science.

Strengths of the ethics review via ERC

Historically, ERCs have enabled cross-disciplinary exchange and assessment [ 27 ]. ERC members typically come from different backgrounds and bring their perspectives to the debate; when multi-disciplinarity is achieved, the mixture of expertise provides the conditions for a solid assessment of the advantages and risks associated with new research. Committees which include members from a variety of backgrounds are also well suited to promote projects from a range of fields, and research that cuts across disciplines [ 28 ]. Within these committees, the reviewers’ expertise can be paired with a specific type of content to be reviewed. This one-to-one match can bring timely and, ideally, useful feedback [ 29 ]. In many countries (e.g., European countries, the United States (US), Canada, Australia), ERCs are explicitly mandated by law to review many forms of research involving human participants; moreover, these laws also describe how such a body should be structured and the purview of its review [ 30 , 31 ]. In principle, ERCs also aim to be representative of society and the research enterprise, including members of the public and minorities, as well as researchers and experts [ 32 ]. And in performing a gatekeeping function to the research enterprise, ERCs play an important role: they recognize that both experts and lay people should have a say, with different views to contribute [ 33 ].

Furthermore, the ERC model strives to ensure independent assessment. The fact that ERCs assess projects “from the outside” and maintain a certain degree of objectivity towards what they are reviewing reduces the risk of overlooking research issues and decreases the risk of conflicts of interest. Moreover, being institutionally distinct—for example, being established by an organization that is distinct from the researcher or the research sponsor—brings added value to the research itself, as this lessens the risk of conflict of interest. Conflict of interest is a serious issue in research ethics because it can compromise the judgment of reviewers. Institutionalized review committees might particularly suffer from political interference. This is the case, for example, for universities and health care systems (like the NHS), which tend to engage “in-house” experts as ethics board members. However, ERCs that can prove themselves independent are considered more trustworthy by the general public and data subjects; it is reassuring to know that an independent committee is overseeing research projects [ 34 ].

The ex-ante (or pre-emptive) ethical evaluation of research studies is considered by many to be the standard procedural approach of ERCs [ 35 ]. Though the literature is divided on the usefulness and added value provided by this form of review [ 36 , 37 ], ex-ante review is commonly used as a mechanism to ensure the ethical validity of a study design before the research is conducted [ 38 , 39 ]. Early research scrutiny aims at risk mitigation: the ERC evaluates potential research risks and benefits, in order to protect participants’ physical and psychological well-being, dignity, and data privacy. This practice saves researchers’ resources and valuable time by preventing the pursuit of unethical or illegal paths [ 40 ]. Finally, the ex-ante ethical assessment gives researchers an opportunity to receive feedback from ERCs, whose competence and experience may improve the research quality and increase public trust in the research [ 41 ].

All strengths mentioned in this section are strengths of the ERC model in principle. In practice, there are many ERCs that are not appropriately interdisciplinary or representative of the population and minorities, that lack independence from the research being reviewed, and that fail to improve research quality, and may in fact hinder it. We now turn to consider some of these weaknesses in more detail.

Weaknesses of the ethics review via ERC

In order to assess whether ERCs are adequately equipped to oversee big data research, we must consider the weaknesses of this model. We identify two categories of weaknesses, described in the following section and summarized in Fig. 1:

  • Persistent weaknesses : those existing in the current oversight system, which could be exacerbated by big data research
  • Novel weaknesses : those which are created by, and inherent to, big data research

Within this second category of novel weaknesses, we further differentiate between:

  • Purview weaknesses : reasons why some big data projects may bypass the ERCs’ purview
  • Functional weaknesses : reasons why some ERCs may be inadequate to assess big data projects specifically

Fig. 1 Weaknesses of the ERCs

We base the conceptual distinction between persistent and novel weaknesses on the fact that big data research diverges from traditional biomedical research in many respects. As previously mentioned, big data projects are often broad in scope, involve new actors, use unprecedented methodologies to analyze data, and require specific expertise. Furthermore, the peculiarities of big data itself (e.g., being large in volume and from a variety of sources) make data-driven research different in practice from traditional research. However, we should not consider the category of “novel weaknesses” a closed category. We do not argue that weaknesses mentioned here do not, at least partially, overlap with others which already exist. In fact, in almost all cases of ‘novelty’, (i) there is some link back to a concept from traditional research ethics, and (ii) some thought has been given to the issue outside of a big data or biomedical context (e.g., the problem of ERCs’ expertise has arisen in other fields [ 42 ]). We believe that by creating conceptual clarity about novel oversight challenges presented by big data research, we can begin to identify tailored reforms.

Persistent weaknesses

As regulation for research oversight varies between countries, ERCs often suffer from a lack of harmonization. This weakness in the current oversight mechanism is compounded by big data research, which often relies on multi-center international consortia. These consortia in turn depend on approval by multiple oversight bodies demanding different types of scrutiny [ 43 ]. Furthermore, big data research may give rise to collaborations between public bodies, universities, corporations, foundations, and citizen science cooperatives. In this network, each stakeholder has different priorities and depends upon its own rules for regulation of the research process [ 44 – 46 ]. Indeed, this expansion of regulatory bodies and aims does not come with a coordinated effort towards agreed-upon review protocols [ 47 ]. The lack of harmonization is perpetuated by academic journals and funding bodies with diverging views on the ethics of big data. If the review bodies which constitute the “ethics ecosystem” [ 19 ] do not agree to the same ethics review requirements, a big data project deemed acceptable by an ERC in one country may be rejected by another ERC, within or beyond the national borders.

In addition, there is inconsistency in the assessment criteria used within and across committees. Researchers report subjective bias in the evaluation methodology of ERCs, as well as variations in ERC judgements which are not based on morally relevant contextual considerations [ 48 , 49 ]. Some authors have argued that the probability of research acceptance among experts increases if some research peer or same-field expert sits on the evaluation committee [ 50 , 51 ]. The judgement of an ERC can also be influenced by the boundaries of the scientific knowledge of its members. These boundaries can impact the ERC’s approach towards risk taking in unexplored fields of research [ 52 ]. Big data research might worsen this problem since the field is relatively new, with no standardized metric to assess risk within and across countries [ 53 ]. The committees do not necessarily communicate with each other to clarify their specific role in the review process, or try to streamline their approach to the assessment. This results in unclear oversight mandates and inconsistent ethical evaluations [ 27 , 54 ].

Additionally, ERCs may fall short in their efforts to justly redistribute the risks and benefits of research. The current review system is still primarily tilted toward protecting the interests of individual research participants. ERCs do not consistently assess societal benefit, or risks and benefits in light of the overall conduct of research (balancing risks for the individual with collective benefits). Although demands on ERCs vary from country to country [ 55 ], the ERC approach is still generally tailored towards traditional forms of biomedical research, such as clinical trials and longitudinal cohort studies with hospital patients. These studies are usually narrow in scope and carry specific risks only for the participants involved. In contrast, big data projects can impact society more broadly. As an example, computational technologies have shown potential to determine individuals’ sexual orientation by screening facial images [ 56 ]. An inadequate assessment of the common good resulting from this type of study can be socially detrimental [ 57 ]. In this sense, big data projects resemble public health research studies, with an ethical focus on the common good over individual autonomy [ 58 ]. Within this context, ERCs have an even greater responsibility to ensure the just distribution of research benefits across the population. Accurately determining the social value of big data research is challenging, as negative consequences may be difficult to detect before research begins. Nevertheless, this task remains a crucial objective of research oversight.

The literature reports examples of the failure of ERCs to be accountable and transparent [ 59 ]. This might be the result of an already unclear role of ERCs. Indeed, the ERCs’ practices are an outcome of different levels of legal, ethical, and professional regulations, which vary largely across jurisdictions. Therefore, some ERCs might function as peer counselors, others as independent advisors, and still others as legal controllers. What seems to be common across countries, though, is that ERCs rarely disclose their procedures, policies, and decision-making processes. The ERCs’ “secrecy” can result in an absence of trust in the ethical oversight model [ 60 ]. This is problematic because ERCs rely on public acceptance as accountable and trustworthy entities [ 61 ]. In big data research, as the number of data subjects is exponentially greater, a lack of accountability and an opaque deliberative process on the part of ERCs might bring even more significant public backlash. Ensuring the truthfulness of the stated benefits and risks of research is a major determinant of trust in both science and research oversight. Researchers are another category of stakeholders negatively impacted by poor communication and publicity on the part of the ERC. Commentators have shown that ERCs often do not clearly provide guidance about the ethical standards applied in the research review [ 62 ]. For instance, if researchers provide unrealistic expectations of privacy and security to data subjects, ERCs have an institutional responsibility to flag those promises (e.g., about data security and the secondary uses of subject data), especially when the research involves personal and highly sensitive data [ 63 ]. For their part, however, ERCs should make their expectations and decision-making processes clear.

Finally, ERCs face the increasing issue of being overwhelmed by the number of studies to review [ 64 , 65 ]. Whereas ERCs originally reviewed only human subjects research in the natural sciences and medicine, over time they also became the ethical body of reference for those conducting human research in the social sciences (e.g., in behavioral psychology, educational sciences, etc.). This increase in demand creates pressure on ERC members, who often review research pro bono and on a voluntary basis. The wide range of big data research could exacerbate this existing issue. Having more research to assess and less time to accomplish the task may negatively impact the quality of the ERC’s output, as well as increase the time needed for review [ 66 ]. Consequently, researchers might carry out potentially risky studies because the relevant ethical issues were overlooked. Furthermore, research itself could be delayed to the point of losing its scientific timeliness.

Novel weaknesses: purview weaknesses

To determine whether the ERC is still the most fit-for-purpose entity to oversee big data research, it is important to establish under which conditions big data projects fall under the purview of ERCs.

Historically, research oversight has primarily focused on human subject research in the biomedical field, using public funding. In the US, for instance, each review board is responsible for a subtype of research based on content or methodology (for example, there are IRBs dedicated to validating clinical trial protocols, assessing cancer treatments, examining pediatric research, and reviewing qualitative research). This traditional ethics review structure cannot accommodate big data research [ 2 ]. Big data projects often reach beyond a single institution, cut across disciplines, involve data collected from a variety of sources, re-use data not originally collected for research purposes, combine diverse methodologies, orient towards population-level research, rely on large data aggregates, and emerge from collaboration with the private sector. Given this scenario, big data projects are likely to fall beyond the purview of ERCs.

Another case in which big data research does not fall under ERC purview is when it relies on anonymized data. If researchers use data that cannot be traced back to subjects (anonymized or non-personal data), then according to both the US Common Rule and HIPAA regulations, the project is considered safe enough to be granted an ethics review waiver. If instead researchers use pseudonymized (or de-identified) data, they must apply for research ethics review, as in principle the key that links the de-identified data with subjects could be revealed or hacked, causing harm to subjects. In the European Union, it would be left to each Member State (and national laws or policies at local institutions) to define whether research using anonymized data should seek ethical review. This case shows once more that current research ethics regulation is relatively loose and disjointed across jurisdictions, and may leave areas where big data research is unregulated. In particular, the special treatment given to anonymized data comes from an emphasis on risk at the individual level. So far in the big data discourse, the concept of harm has been mainly linked to vulnerability in data protection. Therefore, if privacy laws are respected and protection is built into the data system, researchers can prevent harmful outcomes [ 40 ]. However, this view is myopic, as it does not include other misuses of data aggregates, such as group discrimination and dignitary harm. These types of harm are already emerging in the big data ecosystem, where anonymized data reveal the health patterns of certain sub-groups, or computational technologies include strong racial biases [ 67 , 68 ]. Furthermore, studies using anonymized data should not be deemed oversight-free by default, as it is increasingly hard to anonymize data. Technological advancements might soon make it possible to re-identify individuals from aggregate data sets [ 69 ].
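To make the distinction concrete, here is a minimal Python sketch, with all identifiers, field names and values invented for illustration, of why pseudonymised records remain re-linkable in principle while even “anonymised” ones can still single people out:

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # the link key, held by the data controller

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash.

    Whoever holds SECRET_KEY can re-create the mapping back to the person,
    so the record is de-identified but not anonymous.
    """
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "ID-0001", "age_band": "40-49",
          "postcode_area": "CB2", "diagnosis": "type 2 diabetes"}

pseudonymised = {**record, "patient_id": pseudonymise(record["patient_id"])}

# A naive "anonymisation" simply drops the direct identifier; the remaining
# attributes (age band, postcode area, diagnosis) may still identify a person
# within a small group, which is why anonymised or aggregate data are not
# automatically risk-free.
anonymised = {k: v for k, v in record.items() if k != "patient_id"}

print(pseudonymised)
print(anonymised)
```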

The risks associated with big data projects also increase due to the variety of actors involved in research alongside university researchers (e.g., private companies, citizen science associations, bio-citizen groups, community workers cooperatives, foundations, and non-profit organizations) [ 70 , 71 ]. The novel aspect of health-related big data research compared with traditional research is that anyone who can access large amounts of data about individuals and build predictive models based on that data can now determine and infer the health status of a person without directly engaging with that person in a research program [ 72 ]. Facebook, for example, is carrying out a suicide prediction and prevention project which relies exclusively on the information that users post on the social network [ 18 ]. Because this type of research is now possible, and the available ethics review model exempts many big data projects from ERC appraisal, gaps in oversight are growing [ 17 , 73 ]. Just as corporations can re-use publicly available datasets (such as social media data) to determine life insurance premiums [ 74 ], citizen science projects can be conducted without seeking research oversight [ 75 ]. Indeed, participant-led big data research (despite being increasingly common) is another area where the traditional oversight model is not effective [ 76 ]. In addition, ERCs might not take seriously research conducted outside academia or publicly funded institutions, and may thus disregard review requests from actors outside the academic environment (e.g., from citizen science groups or health tech start-ups) [ 77 ].

Novel weaknesses: functional weaknesses

Functional weaknesses are those related to the skills, composition, and operational activities of ERCs in relation to big data research.

From this functional perspective, we argue that the ex-ante review model might not be appropriate for big data research. Project assessment at the project design phase or at the data collection level is insufficient to address the emerging challenges that characterize big data projects – especially as data, over time, could become useful for other purposes, and therefore be re-used or shared [ 53 ]. Limitations of the ex-ante review model have already become apparent in the field of genetic research [ 78 ]. In this context, biobanks must often undergo a second ethics assessment to authorize specific research uses, such as exome sequencing, of their primary data samples [ 79 ]. Similarly, in a case in which an ERC approved the original collection of sensitive personal data, a data access committee would ensure that the secondary uses are in line with the original consent and ethics approval. However, if researchers collect data from publicly accessible platforms, they can potentially use and re-use data for research lawfully, without seeking data subject consent or ERC review. This is often the case in social media research. Social media data, which are collected by researchers or private companies using a form of broad consent, can be re-used by researchers to conduct additional analysis without ERC approval. It is not only the re-use of data that poses unforeseeable risks. The ex-ante approach might not be suitable to assess other stages of the data lifecycle [ 80 ], such as the deployment of machine learning algorithms.

Rather than re-using data, some big data studies build models on existing data (using data mining and machine learning methods), creating new data, which is then used to further feed the algorithms [ 81 ]. Sometimes it is not possible to anticipate which analytic models or tools (e.g., artificial intelligence) will be leveraged in the research. And even then, the nature of the computational technologies which extract meaning from big data makes it difficult to anticipate all the correlations that will emerge from the analysis [ 37 ]. This is an additional reason why big data research often takes a tentative approach to a research question, instead of growing from a specific research hypothesis [ 82 ]. The difficulty of clearly framing big data research itself makes it even harder for ERCs to anticipate unforeseeable risks and potential societal consequences. Given the existing regulations and the intrinsic exploratory nature of big data projects, the mandate of ERCs does not appear well placed to guarantee research oversight. It seems even less so if we consider problems that might arise after the publication of big data studies, such as repurposing or dual-use issues [ 83 ].
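As a rough sketch of how model outputs can become new inputs, the following Python example (entirely synthetic data; scikit-learn assumed available) trains a classifier, treats its own confident predictions on an unlabelled pool as fresh “data”, and retrains on them; none of this loop would be visible to a reviewer who assessed only the original data collection:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical starting point: a small labelled set and a large unlabelled pool.
X_labelled = rng.normal(size=(100, 5))
y_labelled = (X_labelled[:, 0] + X_labelled[:, 1] > 0).astype(int)
X_pool = rng.normal(size=(10_000, 5))

model = LogisticRegression().fit(X_labelled, y_labelled)

# The model's own confident predictions are turned into "new data" ...
proba = model.predict_proba(X_pool)
confident = proba.max(axis=1) > 0.9
X_new = X_pool[confident]
y_new = proba[confident].argmax(axis=1)

# ... which is then fed back to retrain the model itself.
model = LogisticRegression().fit(
    np.vstack([X_labelled, X_new]),
    np.concatenate([y_labelled, y_new]),
)
print(f"{confident.sum()} self-labelled records were added to the training set")
```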

ERCs also face the challenge of assessing the value of informed consent for big data projects. Re-obtaining consent from research subjects is impractical, particularly when consumer-generated data (e.g., social media data) are used for research purposes. In these cases, researchers often rely on broad consent and consent waivers. This leaves the data subjects unaware of their participation in specific studies, and therefore incapable of engaging with the research as it progresses. The data subjects and the communities they represent thus become vulnerable to potential negative research outcomes. The tool of consent has limitations in big data research: it cannot disclose all possible future uses of data, in part because these uses may be unknown at the time of data generation. Moreover, researchers can access existing datasets multiple times and reuse the same data for alternative purposes [ 84 ]. What should be the ERCs’ strategy, given that the current model of informed consent leaves an ethical gap in big data projects? ERCs may be tempted to focus on the consent challenge, neglecting other pressing big data issues [ 53 ]. However, the literature reports an increasing number of authors who argue against the idea of a new consent form for big data studies [ 5 ].

A final widely discussed concern is the ERC’s inadequate expertise in the area of big data research [ 85 , 86 ]. In the past, there have been questions about the technical and statistical expertise of ERC members. For example, ERCs have attempted to conform social science research to the clinical trial model, using the same knowledge and approach to review both types of research [ 87 ]. However, big data research poses further challenges to ERCs’ expertise. First, the distinct methodology of big data studies (based on data aggregation and mining) requires specialized technical expertise (e.g., in information systems, self-learning algorithms, and anonymization protocols). Indeed, big data projects have a strong technical component, due to data volume and sources, which brings specific challenges (e.g., collecting data outside traditional protocols on social media) [ 88 , 89 ]. Second, ERCs may be unfamiliar with new actors involved in big data research, such as citizen science actors or private corporations. Because of this lack of relevant expertise, ERCs may require unjustified amendments to research studies, or even reject big data projects tout court [ 36 ]. Finally, ERCs may lose credibility as an oversight body capable of assessing ethical violations and research misconduct. In the past, ERCs solved this challenge by consulting independent experts in a relevant field when reviewing a protocol in that domain. However, this solution is not always practical, as it depends upon the availability of an expert. Furthermore, experts may be researchers working and publishing in the field themselves. This scenario would be problematic because researchers would have to define the rules that experts must abide by, compromising the concept of independent review [ 19 ]. Nonetheless, this problem does not disqualify the idea of expertise, but rather requires high transparency standards regarding rule development and compliance. Other options include ad-hoc expert committees or the provision of relevant training for existing committee members [ 47 , 90 , 91 ]. Given these options, which one is best suited to address ERCs’ lack of expertise in big data research?

Reforming the ERC

Our analysis shows that ERCs play a critical role in ensuring ethical oversight and risk–benefit evaluation [ 92 ], assessing the scientific validity of a project in its early stages, and offering an independent, critical, and interdisciplinary approach to the review. These strengths demonstrate why the ERC is an oversight model worth holding on to. Nevertheless, ERCs carry both persistent weaknesses and novel, big data-specific weaknesses, which reduce their effectiveness and appropriateness as oversight bodies for data-driven research. To answer our initial research question, we propose that the current oversight mechanism is not as fit for purpose to assess the ethics of big data research as it could be in principle. ERCs should be improved at several levels to adequately address and overcome these challenges. Changes could be introduced at the level of the regulatory framework as well as of procedures. Additionally, reforming the ERC model might mean introducing complementary forms of oversight. In this section we explore these possibilities. Figure 2 offers an overview of the reforms that could aid ERCs in improving their process.

Fig. 2 Reforms overview for the research oversight mechanism

Regulatory reforms

The regulatory design of research oversight is the first aspect which needs reform. ERCs could benefit from new guidance (e.g., in the form of a flowchart) on the ethics of big data research. This guidance could build upon a deep rethinking of the importance of data for the functioning of societies, the way we use data in society, and our justifications for this use. In the UK, for instance, individuals can generally opt out of having their data (e.g., hospital visit data, health records, prescription drugs) stored by physicians’ offices or by NHS digital services. However, exceptions to this opt-out policy apply when uses of the data are vital to the functioning of society (for example, in the case of official national statistics or overriding public interest, such as the COVID-19 pandemic) [ 93 ].

We imagine this new guidance also re-defining the scope of ERC review, from the protection of individual interests to a broader research impact assessment. In other words, it would allow the ERC’s scope to expand and to address the purview issues discussed previously. For example, less research would be oversight-free, because more factors would trigger ERC purview in the first place. The new governance would impose ERC review for research involving anonymized data, or for big data research within public–private partnerships. Furthermore, ERC purview could be extended beyond the initial phase of the study to other points in the data lifecycle [ 94 ]. A possible option is to assess a study after its conclusion (as is the case in the pharmaceutical industry): ERCs could then decide whether research findings and results should be released and further used by the scientific community. This new ethical guidance would serve ERCs not only in deciding whether a project requires review, but also in learning, from past examples and best practices, how best to proceed in the assessment. Hence, this guidance could help increase transparency surrounding the assessment criteria used across ERCs. Transparency could be achieved by defining a minimum global standard for ethics assessment that allows international collaboration based on open data and a homogeneous evaluation model. Acceptance of a global standard would also mean that the same oversight procedures would apply to research projects with similar risks and research paths, regardless of whether they are carried out by public or private entities. Increased clarification and transparency might also streamline the review process within and across committees, rendering the entire system more efficient.

Procedural reforms

Procedural reforms might target specific aspects of the ERC model to make it more suitable for the review of big data research. To begin with, ERCs should develop new operational tools to mitigate emerging big data challenges. For example, ERCs could use the AI Now algorithmic impact assessment tool, which appraises the ethics of automated decision systems and informs decisions about whether or not to deploy them in society [ 95 ]. Forms of broad consent [ 96 ] and dynamic consent [ 20 ] can also address some of the issues raised by the use, re-use, and sharing of big data (publicly available or not). Nonetheless, informed consent should not be considered a panacea for all ethical issues in big data research, especially in the case of publicly available social media data [ 97 ]. If the ethical implications of big data studies affect society and its vulnerable sub-groups, individual consent cannot be relied upon as an effective safeguard. For this reason, ERCs should move towards a more democratic process of review. Possible strategies include engaging research subjects and communities in the decision-making process or promoting a co-governance system. The recent Montreal Declaration for Responsible AI is an example of an ethical oversight process developed out of public involvement [ 98 ]. Furthermore, this inclusive approach could increase the trustworthiness of the ethics review mechanism itself [ 99 ]. In practice, the more that ERCs involve potential data subjects in a transparent conversation about the risks of big data research, the more socially accountable the oversight mechanism will become.

ERCs must also address their lack of big data and general computing expertise. There are several potential ways to bridge this gap. First, ERCs could build capacity with formal training on big data. ERCs are willing to learn from researchers about social media data and computational methodologies used for data mining and analysis [ 85 ]. Second, ERCs could adjust membership to include specific experts from needed fields (e.g., computer scientists, biotechnologists, bioinformaticians, data protection experts). Third, ERCs could engage with external experts for specific consultations. Despite some resistance to accepting help, recent empirical research has shown that ERCs may be inclined to rely upon external experts in case of need [ 86 ].

In the data-driven research context, ERCs must embrace their role as regulatory stewards, and walk researchers through the process of ethics review [ 40 ]. ERCs should establish an open communication channel with researchers to communicate the value of research ethics while clarifying the criteria used to assess research. If ERCs and researchers agree to mutually increase transparency, they create an opportunity to learn from past mistakes and prevent future ones [ 100 ]. Universities might seek to educate researchers on ethical issues that can arise when conducting data-driven research. In general, researchers would benefit from training on identifying issues of ethics or completing ethics self-assessment forms, particularly if they are responsible for submitting projects for review [ 101 ]. As biomedical research is trending away from hospitals and clinical trials, and towards people’s homes and private corporations, researchers should strive towards greater clarity, transparency, and responsibility. Researchers should disclose both envisioned risks and benefits, as well as the anticipated impact at the individual and population level [ 54 ]. ERCs can then more effectively assess the impact of big data research and determine whether the common good is guaranteed. Furthermore, they might examine how research benefits are distributed throughout society. Localized decision making can play a role here [ 55 ]. ERCs may take into account characteristics specific to the social context, to evaluate whether or not the research respects societal values.

Complementary reforms

One way to tackle the novelty of big data research is to reform the current research ethics system through the regulatory and procedural tools described above. However, this strategy may not be sufficient: the current system might require additional support from other forms of oversight to complement its work.

One possibility is the creation of hybrid review mechanisms and norms, merging valuable aspects of the traditional ERC review model with more innovative models which have been adopted by various partners involved in the research (e.g., corporations, participants, communities) [ 102 ]. This integrated mechanism of oversight would cover all stages of big data research and involve all relevant stakeholders [ 103 ]. Journals and the publishing industry could play a role within this hybrid ecosystem in limiting potential dual-use concerns. For instance, in the research publication phase, resources could be assigned to editors to assess research integrity standards and promote only those projects which are ethically aligned. However, these implementations can have an impact only when there is a shared understanding of best practice within the oversight ecosystem [ 19 ].

A further option is to include specialized and distinct ethics committees alongside ERCs, whose purpose is to assess big data research and provide sectoral accreditation to researchers. In this model, ERCs would not be overwhelmed by the number of study proposals to review and could outsource evaluations requiring specialist knowledge in the field of big data. It is true that specialized committees (data safety monitoring boards, data access committees, and responsible research and innovation panels) already exist and support big data researchers in ensuring data protection (e.g., system security, data storage, data transfer). However, something like a “data review board” could assess research implications both for the individual and for society, while also reviewing a project’s technical features. Peer review could play a critical role in this model: the research community retains the expertise needed to conduct ethical research and to support each other when the path is unclear [ 101 ].

Despite their promise, these scenarios all suffer from at least one primary limitation. The former might face a backlash when attempting to bring together the priorities and ethical values of various stakeholders, within common research norms. Furthermore, while decentralized oversight approaches might bring creativity over how to tackle hard problems, they may also be very dispersive and inefficient. The latter could suffer from overlapping scope across committees, resulting in confusing procedures, and multiplying efforts while diluting liability. For example, research oversight committees have multiplied within the United States, leading to redundancy and disharmony across committees [ 47 ]. Moreover, specialized big data ethics committees working in parallel with current ERCs could lead to questions over the role of the traditional ERC, when an increasing number of studies will be big data studies.

Conclusions

ERCs face several challenges in the context of big data research. In this article, we sought to bring clarity to those which might affect the ERC’s practice, distinguishing between persistent weaknesses, which are compounded by big data research, and novel ones. While these flaws are profound and inherent in the current sociotechnical transformation, we argue that the current oversight model is still partially capable of guaranteeing the ethical assessment of research. However, we also advance the notion that introducing reforms at several levels of the oversight mechanism could benefit and improve the ERC system itself. Among these reforms, we identify an urgent need for new ethical guidelines and new ethical assessment tools to safeguard society from the novel risks brought by big data research. Moreover, we recommend that ERCs adapt their membership to include the expertise necessary for addressing the research needs of the future. Additionally, ERCs should accept consultations with external experts and consider training in the technical features of big data as well as in big data ethics. A further reform concerns the need for transparent engagement among stakeholders. We therefore recommend that ERCs involve both researchers and data subjects in the assessment of big data research. Finally, we acknowledge that there is space for coordinated and complementary support from other forms of oversight. However, the actors involved must share a common understanding of best practice and assessment criteria in order to efficiently complement the existing oversight mechanism. We believe that these adaptive suggestions could render the ERC mechanism sufficiently agile and well equipped to overcome data-intensive research challenges and benefit research at large.

Acknowledgements

This article reports the ideas and conclusions that emerged during a collaborative and participatory online workshop. All authors participated in the “Big Data Challenges for Ethics Review Committees” workshop, held online on 23–24 April 2020 and organized by the Health Ethics and Policy Lab, ETH Zurich.

Abbreviations

ERC(s): Ethics Review Committee(s)
HIPAA: Health Insurance Portability and Accountability Act
IRB(s): Institutional Review Board(s)
NHS: National Health Service
REC(s): Research Ethics Committee(s)
UK: United Kingdom
US: United States

Authors' contributions

AF drafted the manuscript, MI, MS1 and EV contributed substantially to the writing. EV is the senior lead on the project from which this article derives. All the authors (AF, MI, MS1, AB, ESD, BF, PF, JK, WK, PK, SML, CN, GS, MS2, MRV, EV) contributed greatly to the intellectual content of this article, edited it, and approved the final version. All authors read and approved the final manuscript.

Funding

This research is supported by the Swiss National Science Foundation under award 407540_167223 (NRP 75 Big Data). MS1 is grateful for funding from the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre (BRC). The funding bodies did not take part in designing this research or in writing the manuscript.

Competing interests

The authors declare that they have no competing interests.

1 There is an unsettled discussion about whether ERCs ought to play a role in evaluating both scientific and ethical aspects of research, or whether these can even come apart, but we will not go into detail here. See: Dawson AJ, Yentis SM. Contesting the science/ethics distinction in the review of clinical research. Journal of Medical Ethics. 2007;33(3):165–7; Angell EL, Bryman A, Ashcroft RE, Dixon-Woods M. An analysis of decision letters by research ethics committees: the ethics/scientific quality boundary examined. BMJ Quality & Safety. 2008;17(2):131–6.



Scientific Research and Big Data

Big Data promises to revolutionise the production of knowledge within and beyond science, by enabling novel, highly efficient ways to plan, conduct, disseminate and assess research. The last few decades have witnessed the creation of novel ways to produce, store, and analyse data, culminating in the emergence of the field of data science , which brings together computational, algorithmic, statistical and mathematical techniques towards extrapolating knowledge from big data. At the same time, the Open Data movement—emerging from policy trends such as the push for Open Government and Open Science—has encouraged the sharing and interlinking of heterogeneous research data via large digital infrastructures. The availability of vast amounts of data in machine-readable formats provides an incentive to create efficient procedures to collect, organise, visualise and model these data. These infrastructures, in turn, serve as platforms for the development of artificial intelligence, with an eye to increasing the reliability, speed and transparency of processes of knowledge creation. Researchers across all disciplines see the newfound ability to link and cross-reference data from diverse sources as improving the accuracy and predictive power of scientific findings and helping to identify future directions of inquiry, thus ultimately providing a novel starting point for empirical investigation. As exemplified by the rise of dedicated funding, training programmes and publication venues, big data are widely viewed as ushering in a new way of performing research and challenging existing understandings of what counts as scientific knowledge.

This entry explores these claims in relation to the use of big data within scientific research, and with an emphasis on the philosophical issues emerging from such use. To this aim, the entry discusses how the emergence of big data—and related technologies, institutions and norms—informs the analysis of the following themes:

  • how statistics, formal and computational models help to extrapolate patterns from data, and with which consequences;
  • the role of critical scrutiny (human intelligence) in machine learning, and its relation to the intelligibility of research processes;
  • the nature of data as research components;
  • the relation between data and evidence, and the role of data as source of empirical insight;
  • the view of knowledge as theory-centric;
  • understandings of the relation between prediction and causality;
  • the separation of fact and value; and
  • the risks and ethics of data science.

These are areas where attention to research practices revolving around big data can benefit philosophy, and particularly work in the epistemology and methodology of science. This entry doesn’t cover the vast scholarship in the history and social studies of science that has emerged in recent years on this topic, though references to some of that literature can be found when conceptually relevant. Complementing historical and social scientific work in data studies, the philosophical analysis of data practices can also elicit significant challenges to the hype surrounding data science and foster a critical understanding of the role of data-fuelled artificial intelligence in research.

1. What Are Big Data?


We are witnessing a progressive “datafication” of social life. Human activities and interactions with the environment are being monitored and recorded with increasing effectiveness, generating an enormous digital footprint. The resulting “big data” are a treasure trove for research, with ever more sophisticated computational tools being developed to extract knowledge from such data. One example is the use of various different types of data acquired from cancer patients, including genomic sequences, physiological measurements and individual responses to treatment, to improve diagnosis and treatment. Another example is the integration of data on traffic flow, environmental and geographical conditions, and human behaviour to produce safety measures for driverless vehicles, so that when confronted with unforeseen events (such as a child suddenly darting into the street on a very cold day), the data can be promptly analysed to identify and generate an appropriate response (the car swerving enough to avoid the child while also minimising the risk of skidding on ice and damage to other vehicles). Yet another instance is the understanding of the nutritional status and needs of a particular population that can be extracted from combining data on food consumption generated by commercial services (e.g., supermarkets, social media and restaurants) with data coming from public health and social services, such as blood test results and hospital intakes linked to malnutrition. In each of these cases, the availability of data and related analytic tools is creating novel opportunities for research and for the development of new forms of inquiry, which are widely perceived as having a transformative effect on science as a whole.

A useful starting point in reflecting on the significance of such cases for a philosophical understanding of research is to consider what the term “big data” actually refers to within contemporary scientific discourse. There are multiple ways to define big data (Kitchin 2014, Kitchin & McArdle 2016). Perhaps the most straightforward characterisation is as large datasets that are produced in a digital form and can be analysed through computational tools. Hence the two features most commonly associated with Big Data are volume and velocity. Volume refers to the size of the files used to archive and spread data. Velocity refers to the pressing speed with which data is generated and processed. The body of digital data created by research is growing at breakneck pace and in ways that are arguably impossible for the human cognitive system to grasp and thus require some form of automated analysis.

Volume and velocity are also, however, the most disputed features of big data. What may be perceived as “large volume” or “high velocity” depends on rapidly evolving technologies to generate, store, disseminate and visualise the data. This is exemplified by the high-throughput production, storage and dissemination of genomic sequencing and gene expression data, where both data volume and velocity have dramatically increased within the last two decades. Similarly, current understandings of big data as “anything that cannot be easily captured in an Excel spreadsheet” are bound to shift rapidly as new analytic software becomes established, and the very idea of using spreadsheets to capture data becomes a thing of the past. Moreover, data size and speed do not take account of the diversity of data types used by researchers, which may include data that are not generated in digital formats or whose format is not computationally tractable, and which underscores the importance of data provenance (that is, the conditions under which data were generated and disseminated) to processes of inference and interpretation. And as discussed below, the emphasis on physical features of data obscures the continuing dependence of data interpretation on circumstances of data use, including specific queries, values, skills and research situations.

An alternative is to define big data not by reference to their physical attributes, but rather by virtue of what can and cannot be done with them. In this view, big data is a heterogeneous ensemble of data collected from a variety of different sources, typically (but not always) in digital formats suitable for algorithmic processing, in order to generate new knowledge. For example, boyd and Crawford (2012: 663) identify big data with “the capacity to search, aggregate and cross-reference large datasets”, while O’Malley and Soyer (2012) focus on the ability to interrogate and interrelate diverse types of data, with the aim to be able to consult them as a single body of evidence. The examples of transformative “big data research” given above are all easily fitted into this view: it is not the mere fact that lots of data are available that makes a difference in those cases, but rather the fact that lots of data can be mobilised from a wide variety of sources (medical records, environmental surveys, weather measurements, consumer behaviour); a minimal sketch of what such cross-referencing looks like in practice follows the list below. This account makes sense of other characteristic “v-words” that have been associated with big data, including:

  • Variety in the formats and purposes of data, which may include objects as different as samples of animal tissue, free-text observations, humidity measurements, GPS coordinates, and the results of blood tests;
  • Veracity , understood as the extent to which the quality and reliability of big data can be guaranteed. Data with high volume, velocity and variety are at significant risk of containing inaccuracies, errors and unaccounted-for bias. In the absence of appropriate validation and quality checks, this could result in a misleading or outright incorrect evidence base for knowledge claims (Floridi & Illari 2014; Cai & Zhu 2015; Leonelli 2017);
  • Validity , which indicates the selection of appropriate data with respect to the intended use. The choice of a specific dataset as evidence base requires adequate and explicit justification, including recourse to relevant background knowledge to ground the identification of what counts as data in that context (e.g., Loettgers 2009, Bogen 2010);
  • Volatility , i.e., the extent to which data can be relied upon to remain available, accessible and re-interpretable despite changes in archival technologies. This is significant given the tendency of formats and tools used to generate and analyse data to become obsolete, and the efforts required to update data infrastructures so as to guarantee data access in the long term (Bowker 2006; Edwards 2010; Lagoze 2014; Borgman 2015);
  • Value , i.e., the multifaceted forms of significance attributed to big data by different sections of society, which depend as much on the intended use of the data as on historical, social and geographical circumstances (Leonelli 2016, D’Ignazio and Klein 2020). Alongside scientific value, researchers may impute financial, ethical, reputational and even affective value to data. The institutions involved in governing and funding research also have ways of valuing data, which may not always overlap with the priorities of researchers (Tempini 2017).
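The following minimal pandas sketch (every record, column name and value is invented) illustrates both the “cross-referencing” reading of big data mentioned above and the sort of elementary veracity and validity checks the list alludes to:

```python
import pandas as pd

# Two hypothetical extracts from unrelated sources: loyalty-card purchases
# and public-health laboratory results, linked only by a shared person key.
purchases = pd.DataFrame({
    "person_id": [1, 2, 2, 3],
    "item": ["lentils", "soda", "soda", "bread"],
    "kcal": [350, 140, 140, 250],
})
bloodwork = pd.DataFrame({
    "person_id": [1, 2],
    "vitamin_d_nmol_l": [55.0, 18.0],
})

# Cross-referencing: the two datasets are interrogated as one body of evidence.
linked = purchases.merge(bloodwork, on="person_id", how="left")

# Elementary veracity/validity checks before any inference is drawn.
assert (linked["kcal"] >= 0).all(), "implausible energy values"
share_missing = linked["vitamin_d_nmol_l"].isna().mean()

print(linked.groupby("person_id")["kcal"].sum())
print(f"records lacking a matching lab value: {share_missing:.0%}")
```

Real projects would of course involve far larger tables, messier keys and more careful quality criteria; the point is only that consulting data “as a single body of evidence” is, at bottom, an operation of linkage plus scrutiny of what the linked data can and cannot support.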

This list of features, though not exhaustive, highlights how big data is not simply “a lot of data”. The epistemic power of big data lies in their capacity to bridge between different research communities, methodological approaches and theoretical frameworks that are difficult to link due to conceptual fragmentation, social barriers and technical difficulties (Leonelli 2019a). And indeed, appeals to big data often emerge from situations of inquiry that are at once technically, conceptually and socially challenging, and where existing methods and resources have proved insufficient or inadequate (Sterner & Franz 2017; Sterner, Franz, & Witteveen 2020).

This understanding of big data is rooted in a long history of researchers grappling with large and complex datasets, as exemplified by fields like astronomy, meteorology, taxonomy and demography (see the collections assembled by Daston 2017; Aronova et al. 2017; Porter & Chadarevian 2018; as well as Aronova et al. 2010, Sepkoski 2013, Stevens 2016, Strasser 2019 among others). Similarly, biomedical research—and particularly subfields such as epidemiology, pharmacology and public health—has an extensive tradition of tackling data of high volume, velocity, variety and volatility, and whose validity, veracity and value are regularly negotiated and contested by patients, governments, funders, pharmaceutical companies, insurances and public institutions (Bauer 2008). Throughout the twentieth century, these efforts spurred the development of techniques, institutions and instruments to collect, order, visualise and analyse data, such as: standard classification systems and formats; guidelines, tools and legislation for the management and security of sensitive data; and infrastructures to integrate and sustain data collections over long periods of time (Daston 2017).

This work culminated in the application of computational technologies, modelling tools and statistical methods to big data (Porter 1995; Humphreys 2004; Edwards 2010), increasingly pushing the boundaries of data analytics thanks to supervised learning, model fitting, deep neural networks, search and optimisation methods, complex data visualisations and various other tools now associated with artificial intelligence. Many of these tools are based on algorithms whose functioning and results are tested against specific data samples (a process called “training”). These algorithms are programmed to “learn” from each interaction with novel data: in other words, they have the capacity to change themselves in response to new information being inputted into the system, thus becoming more attuned to the phenomena they are analysing and improving their ability to predict future behaviour. The scope and extent of such changes is shaped by the assumptions used to build the algorithms and the capability of related software and hardware to identify, access and process information of relevance to the learning in question. There is however a degree of unpredictability and opacity to these systems, which can evolve to the point of defying human understanding (more on this below).
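A minimal sketch of this kind of incremental “learning from each interaction with novel data”, using scikit-learn’s incremental-fitting interface on entirely synthetic batches (all names and numbers invented), might look as follows:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
clf = SGDClassifier()  # a linear classifier trained by stochastic gradient descent

def next_batch(n: int = 200):
    """Stand-in for a stream of freshly generated observations."""
    X = rng.normal(size=(n, 3))
    y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(int)
    return X, y

# Each call to partial_fit nudges the model in response to the newest batch,
# without revisiting earlier data: the model changes itself as new information
# is fed into the system.
X, y = next_batch()
clf.partial_fit(X, y, classes=[0, 1])
for _ in range(50):
    clf.partial_fit(*next_batch())

X_test, y_test = next_batch()
print("accuracy on a fresh batch:", round(clf.score(X_test, y_test), 3))
```

The opacity mentioned above does not come from such simple linear updates, but from the same pattern applied to far larger models and data streams, where the cumulative effect of many updates is hard to reconstruct or explain.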

New institutions, communication platforms and regulatory frameworks also emerged to assemble, prepare and maintain data for such uses (Kitchin 2014), such as various forms of digital data infrastructures, organisations aiming to coordinate and improve the global data landscape (e.g., the Research Data Alliance), and novel measures for data protection, like the General Data Protection Regulation, adopted by the European Union in 2016 and enforced from 2018. Together, these techniques and institutions afford the opportunity to assemble and interpret data at a much broader scale, while also promising to deliver finer levels of granularity in data analysis. [1] They increase the scope of any investigation by making it possible for researchers to link their own findings to those of countless others across the world, both within and beyond the academic sphere. By enhancing the mobility of data, they facilitate their repurposing for a variety of goals that may have been unforeseeable when the data were originally generated. And by transforming the role of data within research, they heighten their status as valuable research outputs in and of themselves. These technological and methodological developments have significant implications for philosophical conceptualisations of data, inferential processes and scientific knowledge, as well as for how research is conducted, organised, governed and assessed. It is to these philosophical concerns that I now turn.

Big data are often associated with the idea of data-driven research, where learning happens through the accumulation of data and the application of methods to extract meaningful patterns from those data. Within data-driven inquiry, researchers are expected to use data as their starting point for inductive inference, without relying on theoretical preconceptions—a situation described by advocates as “the end of theory”, in contrast to theory-driven approaches where research consists of testing a hypothesis (Anderson 2008, Hey et al. 2009). In principle at least, big data constitute the largest pool of data ever assembled and thus a strong starting point to search for correlations (Mayer-Schönberger & Cukier 2013). Crucial to the credibility of the data-driven approach is the efficacy of the methods used to extrapolate patterns from data and evaluate whether or not such patterns are meaningful, and what “meaning” may involve in the first place. Hence, some philosophers and data scholars have argued that

the most important and distinctive characteristic of Big Data [is] its use of statistical methods and computational means of analysis, (Symons & Alvarado 2016: 4)

such as for instance machine learning tools, deep neural networks and other “intelligent” practices of data handling.

The emphasis on statistics as key adjudicator of validity and reliability of patterns extracted from data is not novel. Exponents of logical empiricism looked for logically watertight methods to secure and justify inference from data, and their efforts to develop a theory of probability proceeded in parallel with the entrenchment of statistical reasoning in the sciences in the first half of the twentieth century (Romeijn 2017). In the early 1960s, Patrick Suppes offered a seminal link between statistical methods and the philosophy of science through his work on the production and interpretation of data models. As a philosopher deeply embedded in experimental practice, Suppes was interested in the means and motivations of key statistical procedures for data analysis such as data reduction and curve fitting. He argued that once data are adequately prepared for statistical modelling, all the concerns and choices that motivated data processing become irrelevant to their analysis and interpretation. This inspired him to differentiate between models of theory, models of experiment and models of data, noting that such different components of inquiry are governed by different logics and cannot be compared in a straightforward way. For instance,

the precise definition of models of the data for any given experiment requires that there be a theory of the data in the sense of the experimental procedure, as well as in the ordinary sense of the empirical theory of the phenomena being studied. (Suppes 1962: 253)

Suppes viewed data models as necessarily statistical: that is, as objects

designed to incorporate all the information about the experiment which can be used in statistical tests of the adequacy of the theory. (Suppes 1962: 258)

His formal definition of data models reflects this decision, with statistical requirements such as homogeneity, stationarity and order serving as the ultimate criteria for identifying a data model Z and evaluating its adequacy:

Z is an N-fold model of the data for experiment Y if and only if there is a set Y and a probability measure P on subsets of Y such that \(\mathcal{Y} = \langle Y, P\rangle\) is a model of the theory of the experiment, Z is an N-tuple of elements of Y, and Z satisfies the statistical tests of homogeneity, stationarity and order. (1962: 259)

This analysis of data models portrayed statistical methods as key conduits between data and theory, and hence as crucial components of inferential reasoning.

The focus on statistics as entry point to discussions of inference from data was widely promoted in subsequent philosophical work. Prominent examples include Deborah Mayo, who in her book Error and the Growth of Experimental Knowledge asked:

What should be included in data models? The overriding constraint is the need for data models that permit the statistical assessment of fit (between prediction and actual data); (Mayo 1996: 136)

and Bas van Fraassen, who also embraced the idea of data models as “summarizing relative frequencies found in data” (Van Fraassen 2008: 167). Closely related is the emphasis on statistics as means to detect error within datasets in relation to specific hypotheses, most prominently endorsed by the error-statistical approach to inference championed by Mayo and Aris Spanos (Mayo & Spanos 2009a). This approach aligns with the emphasis on computational methods for data analysis within big data research, and supports the idea that the better the inferential tools and methods, the better the chance to extract reliable knowledge from data.

When it comes to addressing methodological challenges arising from the computational analysis of big data, however, statistical expertise needs to be complemented by computational savvy in the training and application of algorithms associated with artificial intelligence, including machine learning but also other mathematical procedures for operating upon data (Bringsjord & Govindarajulu 2018). Consider for instance the problem of overfitting, i.e., the identification of patterns that fit a particular dataset but fail to generalise beyond it, which can be greatly amplified by the training techniques employed by machine learning algorithms. There is no guarantee that an algorithm trained to successfully extrapolate patterns from a given dataset will be as successful when applied to other data. Common approaches to this problem involve the re-ordering and partitioning of both data and training methods, so that it is possible to compare the application of the same algorithms to different subsets of the data (“cross-validation”), combine predictions arising from differently trained algorithms (“ensembling”) or use hyperparameters (parameters whose value is set prior to data training) to prepare the data for analysis.
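As a purely illustrative sketch (not part of the philosophical argument, using the scikit-learn library with synthetic data; all names and parameter values are assumptions made for the example), the following Python code shows two of the strategies just mentioned, cross-validation and ensembling, with the maximum tree depth acting as a hyperparameter fixed before training.

```python
# Illustrative sketch of cross-validation and ensembling with scikit-learn.
# The data are synthetic; model choices and hyperparameter values are arbitrary.
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

# Cross-validation: the same algorithm is trained and scored on different
# partitions of the data, exposing overfitting to any single split.
tree = DecisionTreeClassifier(max_depth=5, random_state=0)  # max_depth is a hyperparameter
print("per-fold accuracy:", cross_val_score(tree, X, y, cv=cv))

# Ensembling: predictions from differently trained algorithms are combined.
ensemble = VotingClassifier(estimators=[
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("logreg", LogisticRegression(max_iter=1000)),
])
print("ensemble accuracy:", cross_val_score(ensemble, X, y, cv=cv).mean())
```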

Handling these issues, in turn, requires

familiarity with the mathematical operations in question, their implementations in code, and the hardware architectures underlying such implementations. (Lowrie 2017: 3)

For instance, machine learning

aims to build programs that develop their own analytic or descriptive approaches to a body of data, rather than employing ready-made solutions such as rule-based deduction or the regressions of more traditional statistics. (Lowrie 2017: 4)

In other words, statistics and mathematics need to be complemented by expertise in programming and computer engineering. The ensemble of skills thus construed results in a specific epistemological approach to research, which is broadly characterised by an emphasis on the means of inquiry as the most significant driver of research goals and outputs. This approach, which Sabina Leonelli characterised as data-centric , involves “focusing more on the processes through which research is carried out than on its ultimate outcomes” (Leonelli 2016: 170). In this view, procedures, techniques, methods, software and hardware are the prime motors of inquiry and the chief influence on its outcomes. Focusing more specifically on computational systems, John Symons and Jack Horner argued that much of big data research consists of software-intensive science rather than data-driven research: that is, science that depends on software for its design, development, deployment and use, and thus encompasses procedures, types of reasoning and errors that are unique to software, such as for example the problems generated by attempts to map real-world quantities to discrete-state machines, or approximating numerical operations (Symons & Horner 2014: 473). Software-intensive science is arguably supported by an algorithmic rationality focused on the feasibility, practicality and efficiency of algorithms, which is typically assessed by reference to concrete situations of inquiry (Lowrie 2017).

Algorithms are enormously varied in their mathematical structures and underpinning conceptual commitments, and more philosophical work needs to be carried out on the specifics of computational tools and software used in data science and related applications—with emerging work in philosophy of computer science providing an excellent way forward (Turner & Angius 2019). Nevertheless, it is clear that whether or not a given algorithm successfully applies to the data at hand depends on factors that cannot be controlled through statistical or even computational methods: for instance, the size, structure and format of the data, the nature of the classifiers used to partition the data, the complexity of decision boundaries and the very goals of the investigation.

In a forceful critique informed by the philosophy of mathematics, Christian Calude and Giuseppe Longo argued that there is a fundamental problem with the assumption that more data will necessarily yield more information:

very large databases have to contain arbitrary correlations. These correlations appear only due to the size, not the nature, of data. (Calude & Longo 2017: 595)

They conclude that big data analysis is by definition unable to distinguish spurious from meaningful correlations and is therefore a threat to scientific research. A related worry, sometimes dubbed “the curse of dimensionality” by data scientists, concerns the extent to which the analysis of a given dataset can be scaled up in complexity and in the number of variables being considered. It is well known that the more dimensions one considers in classifying samples, for example, the larger the dataset required for generalisations across those dimensions to be accurate. This demonstrates the continuing, tight dependence between the volume and quality of data on the one hand, and the type and breadth of research questions for which data need to serve as evidence on the other.
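A toy simulation can make the point about arbitrary correlations vivid. The following sketch, offered only as an illustration and not as a reconstruction of Calude and Longo's own argument, generates pure noise for many variables and few observations, then counts the seemingly strong pairwise correlations that appear by chance; all numbers are arbitrary choices.

```python
# Illustrative only: strong-looking correlations arise in pure noise once
# there are many variables and few observations. All values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_variables = 50, 2000                   # few observations, many variables
data = rng.normal(size=(n_samples, n_variables))    # no real structure at all

corr = np.corrcoef(data, rowvar=False)                    # pairwise correlations
upper = np.abs(corr[np.triu_indices(n_variables, k=1)])   # ignore the diagonal

print("pairs with |r| > 0.5:", int((upper > 0.5).sum()))
print("largest spurious correlation:", round(float(upper.max()), 3))
```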

Determining the fit between inferential methods and data requires high levels of expertise and contextual judgement (a situation known within machine learning as the “no free lunch theorem”). Indeed, overreliance on software for inference and data modelling can yield highly problematic results. Symons and Horner note that the use of complex software in big data analysis makes margins of error unknowable, because there is no clear way to test them statistically (Symons & Horner 2014: 473). The path complexity of programs with high conditionality imposes limits on standard error correction techniques. As a consequence, there is no effective method for characterising the error distribution in the software except by testing all paths in the code, which is unrealistic and intractable in the vast majority of cases due to the complexity of the code.

Rather than acting as a substitute, the effective and responsible use of artificial intelligence tools in big data analysis requires the strategic exercise of human intelligence—but for this to happen, AI systems applied to big data need to be accessible to scrutiny and modification. Whether or not this is the case, and who is best qualified to exercise such scrutiny, is under dispute. Thomas Nickles argued that the increasingly complex and distributed algorithms used for data analysis follow in the footsteps of long-standing scientific attempts to transcend the limits of human cognition. The resulting epistemic systems may no longer be intelligible to humans: an “alien intelligence” within which “human abilities are no longer the ultimate criteria of epistemic success” (Nickles forthcoming). Such unbound cognition holds the promise of enabling powerful inferential reasoning from previously unimaginable volumes of data. The difficulties in contextualising and scrutinising such reasoning, however, cast doubt on the reliability of the results. It is not only machine learning algorithms that are becoming increasingly inaccessible to evaluation: beyond the complexities of programming code, computational data analysis requires a whole ecosystem of classifications, models, networks and inference tools which typically have different histories and purposes, and whose relation to each other—and effects when they are used together—are far from understood and may well be untraceable.

This raises the question of whether the knowledge produced by such data analytic systems is at all intelligible to humans, and if so, what forms of intelligibility it yields. It is certainly the case that deriving knowledge from big data may not involve an increase in human understanding, especially if understanding is conceived as an epistemic skill (de Regt 2017). This may not be a problem for those who await the rise of a new species of intelligent machines, who may master new cognitive tools in a way that humans cannot. But as Nickles, Nicholas Rescher (1984), Werner Callebaut (2012) and others pointed out, even in that case “we would not have arrived at perspective-free science” (Nickles forthcoming). While the human histories and assumptions interwoven into these systems may be hard to disentangle, they still affect their outcomes; and whether or not these processes of inquiry are open to critical scrutiny, their telos, implications and significance for life on the planet arguably should be. As argued by Dan McQuillan (2018), the increasing automation of big data analytics may foster acceptance of a Neoplatonist machinic metaphysics, within which mathematical structures “uncovered” by AI would trump any appeal to human experience. Luciano Floridi echoes this intuition in his analysis of what he calls the infosphere:

The great opportunities offered by Information and Communication Technologies come with a huge intellectual responsibility to understand them and take advantage of them in the right way. (2014: vii)

These considerations parallel Paul Humphreys’s long-standing critique of computer simulations as epistemically opaque (Humphreys 2004, 2009)—and particularly his definition of what he calls essential epistemic opacity:

A process is essentially epistemically opaque to X if and only if it is impossible, given the nature of X, for X to know all of the epistemically relevant elements of the process. (Humphreys 2009: 618)

Different facets of the general problem of epistemic opacity are stressed within the vast philosophical scholarship on the role of modelling, computing and simulations in the sciences: the implications of lacking experimental access to the concrete parts of the world being modelled, for instance (Morgan 2005; Parker 2009; Radder 2009); the difficulties in testing the reliability of computational methods used within simulations (Winsberg 2010; Morrison 2015); the relation between opacity and justification (Durán & Formanek 2018); the forms of black-boxing associated with mechanistic reasoning implemented in computational analysis (Craver and Darden 2013; Bechtel 2016); and the debate over the intrinsic limits of computational approaches and related expertise (Collins 1990; Dreyfus 1992). Roman Frigg and Julian Reiss argued that such issues do not constitute fundamental challenges to the nature of inquiry and modelling, and in fact exist in a continuum with traditional methodological issues well-known within the sciences (Frigg & Reiss 2009). Whether or not one agrees with this position (Humphreys 2009; Beisbart 2012), big data analysis is clearly pushing computational and statistical methods to their limit, thus highlighting the boundaries to what even technologically augmented human beings are capable of knowing and understanding.

Research on big data analysis thus sheds light on elements of the research process that cannot be fully controlled, rationalised or even considered through recourse to formal tools.

One such element is the work required to present empirical data in a machine-readable format that is compatible with the software and analytic tools at hand. Data need to be selected, cleaned and prepared to be subjected to statistical and computational analysis. The processes involved in separating data from noise, clustering data so that it is tractable, and integrating data of different formats turn out to be highly sophisticated and theoretically structured, as demonstrated for instance by James McAllister’s (1997, 2007, 2011) and Uljana Feest’s (2011) work on data patterns, Marcel Boumans’s and Leonelli’s comparison of clustering principles across fields (forthcoming), and James Griesemer’s (forthcoming) and Mary Morgan’s (forthcoming) analyses of the peculiarities of datasets. Suppes was so concerned by what he called the “bewildering complexity” of data production and processing activities that he worried that philosophers would not appreciate the ways in which statistics can and does help scientists to abstract data away from such complexity. He described the large group of research components and activities used to prepare data for modelling as “pragmatic aspects” encompassing “every intuitive consideration of experimental design that involved no formal statistics” (Suppes 1962: 258), and positioned them as the lowest step of his hierarchy of models—at the opposite end from its pinnacle, which is occupied by models of theory. Despite recent efforts to rehabilitate the methodology of inductive-statistical modelling and inference (Mayo & Spanos 2009b), this approach has been shared by many philosophers who regard processes of data production and processing as so chaotic as to defy systematic analysis. This explains why data have received so little consideration in philosophy of science when compared to models and theory.

The question of how data are defined and identified, however, is crucial for understanding the role of big data in scientific research. Let us now consider two philosophical views—the representational view and the relational view—that are both compatible with the emergence of big data, and yet place emphasis on different aspects of that phenomenon, with significant implications for understanding the role of data within inferential reasoning and, as we shall see in the next section, as evidence. The representational view construes data as reliable representations of reality which are produced via the interaction between humans and the world. The interactions that generate data can take place in any social setting regardless of research purposes. Examples range from a biologist measuring the circumference of a cell in the lab and noting the result in an Excel file, to a teacher counting the number of students in her class and transcribing it in the class register. What counts as data in these interactions are the objects created in the process of description and/or measurement of the world. These objects can be digital (the Excel file) or physical (the class register) and form a footprint of a specific interaction with the natural world. This footprint—“trace” or “mark”, in the words of Ian Hacking (1992) and Hans-Jörg Rheinberger (2011), respectively—constitutes a crucial reference point for analytic study and for the extraction of new insights. This is the reason why data form a legitimate foundation for empirical knowledge: the production of data is equivalent to “capturing” features of the world that can be used for systematic study. According to the representational approach, data are objects with fixed and unchangeable content, whose meaning, in virtue of being representations of reality, needs to be investigated and revealed step-by-step through adequate inferential methods. The data documenting cell shape can be modelled to test the relevance of shape to the elasticity, permeability and resilience of cells, producing an evidence base to understand cell-to-cell signalling and development. The data produced by counting students in class can be aggregated with similar data collected in other schools, producing an evidence base to evaluate the density of students in the area and their school attendance frequency.

This reflects the intuition that data, especially when they come in the form of numerical measurements or images such as photographs, somehow mirror the phenomena that they are created to document, thus providing a snapshot of those phenomena that is amenable to study under the controlled conditions of research. It also reflects the idea of data as “raw” products of research, which are as close as it gets to unmediated knowledge of reality. This makes sense of the truth-value sometimes assigned to data as irrefutable sources of evidence—the Popperian idea that if data are found to support a given claim, then that claim is corroborated as true at least as long as no other data are found to disprove it. Data in this view represent an objective foundation for the acquisition of knowledge and this very objectivity—the ability to derive knowledge from human experience while transcending it—is what makes knowledge empirical. This position is well-aligned with the idea that big data is valuable to science because it facilitates the (broadly understood) inductive accumulation of knowledge: gathering data collected via reliable methods produces a mountain of facts ready to be analysed and, the more facts are produced and connected with each other, the more knowledge can be extracted.

Philosophers have long acknowledged that data do not speak for themselves and that different types of data require different tools for analysis and preparation to be interpreted (Bogen 2009 [2013]). According to the representational view, there are correct and incorrect ways of interpreting data, which those responsible for data analysis need to uncover. But what is a “correct” interpretation in the realm of big data, where data are consistently treated as mobile entities that can, at least in principle, be reused in countless different ways and towards different objectives? Perhaps more than at any other time in the history of science, the current mobilisation and re-use of big data highlight the degree to which data interpretation—and with it, whatever data is taken to represent—may differ depending on the conceptual, material and social conditions of inquiry. The analysis of how big data travels across contexts shows that the expectations and abilities of those involved determine not only the way data are interpreted, but also what is regarded as “data” in the first place (Leonelli & Tempini forthcoming). The representational view of data as objects with fixed and contextually independent meaning is at odds with these observations.

An alternative approach is to embrace these findings and abandon the idea of data as fixed representations of reality altogether. Within the relational view, data are objects that are treated as potential or actual evidence for scientific claims in ways that can, at least in principle, be scrutinised and accounted for (Leonelli 2016). The meaning assigned to data depends on their provenance, their physical features and what these features are taken to represent, and the motivations and instruments used to visualise them and to defend specific interpretations. The reliability of data thus depends on the credibility and strictness of the processes used to produce and analyse them. The presentation of data; the way they are identified, selected, and included (or excluded) in databases; and the information provided to users to re-contextualise them are fundamental to producing knowledge and significantly influence its content. For instance, changes in data format—as most obviously involved in digitisation, data compression or archival procedures—can have a significant impact on where, when, and by whom the data are used as a source of knowledge.

This framework acknowledges that any object can be used as a datum, or stop being used as such, depending on the circumstances—a consideration familiar to big data analysts accustomed to picking and mixing data coming from a vast variety of sources. The relational view also explains how, depending on the research perspective interpreting it, the same dataset may be used to represent different aspects of the world (“phenomena” as famously characterised by James Bogen and James Woodward, 1988). When considering the full cycle of scientific inquiry from the viewpoint of data production and analysis, it is at the stage of data modelling that a specific representational value is attributed to data (Leonelli 2019b).

The relational view of data encourages attention to the history of data, highlighting their continual evolution and sometimes radical alteration, and the impact of this feature on the power of data to confirm or refute hypotheses. It explains the critical importance of documenting data management and transformation processes, especially with big data that transit far and wide over digital channels and are grouped and interpreted in different ways and formats. It also explains the increasing recognition of the expertise of those who produce, curate, and analyse data as indispensable to the effective interpretation of big data within and beyond the sciences; and the inextricable link between social and ethical concerns around the potential impact of data sharing and scientific concerns around the quality, validity, and security of data (boyd & Crawford 2012; Tempini & Leonelli, 2018).

Depending on which view on data one takes, expectations around what big data can do for science will vary dramatically. The representational view accommodates the idea of big data as providing the most comprehensive, reliable and generative knowledge base ever witnessed in the history of science, by virtue of its sheer size and heterogeneity. The relational view makes no such commitment, focusing instead on what inferences are being drawn from such data at any given point, how and why.

One thing that the representational and relational views agree on is the key epistemic role of data as empirical evidence for knowledge claims or interventions. While there is a large philosophical literature on the nature of evidence (e.g., Achinstein 2001; Reiss 2015; Kelly 2016), the relation between data and evidence has received less attention. This is arguably due to an implicit acceptance, by many philosophers, of the representational view of data. Within the representational view, the identification of what counts as data is prior to the study of what those data can be evidence for: in other words, data are “givens”, as the etymology of the word indicates, and inferential methods are responsible for determining whether and how the data available to investigators can be used as evidence, and for what. The focus of philosophical attention is thus on formal methods to single out errors and misleading interpretations, and the probabilistic and/or explanatory relation between what is unproblematically taken to be a body of evidence and a given hypothesis. Hence much of the expansive philosophical work on evidence avoids the term “data” altogether. Peter Achinstein’s seminal work is a case in point: it discusses observed facts and experimental results, and whether and under which conditions scientists would have reasons to believe such facts, but it makes no mention of data and related processing practices (Achinstein 2001).

By contrast, within the relational view an object can only be identified as a datum when it is viewed as having value as evidence. Evidence becomes a category of data identification, rather than a category of data use as in the representational view (Canali 2019). Evidence is thus constitutive of the very notion of data and cannot be disentangled from it. This involves accepting that the conditions under which a given object can serve as evidence—and thus be viewed as a datum—may change; and that should this evidential role stop altogether, the object would revert to being an ordinary, non-datum item. For example, the photograph of a plant taken by a tourist in a remote region may become relevant as evidence for an inquiry into the morphology of plants from that particular locality; yet most photographs of plants are never considered as evidence for an inquiry into the features and functioning of the world, and of those that are, many may subsequently be discarded as uninteresting or no longer pertinent to the questions being asked.

This view accounts for the mobility and repurposing that characterises big data use, and for the possibility that objects that were not originally generated in order to serve as evidence may be subsequently adopted as such. Consider Mayo and Spanos’s “minimal scientific principle for evidence”, which they define as follows:

Data \(x_0\) provide poor evidence for H if they result from a method or procedure that has little or no ability of finding flaws in H, even if H is false. (Mayo & Spanos 2009b)

This principle is compatible with the relational view of data since it incorporates cases where the methods used to generate and process data may not have been geared towards the testing of a hypothesis H: all it asks is that such methods can be made relevant to the testing of H, at the point in which data are used as evidence for H (I shall come back to the role of hypotheses in the handling of evidence in the next section).

The relational view also highlights the relevance of practices of data formatting and manipulation to the treatment of data as evidence, thus taking attention away from the characteristics of the data objects alone and focusing instead on the agency attached to and enabled by those characteristics. Nora Boyd has provided a way to conceptualise data processing as an integral part of inferential processes, and thus of how we should understand evidence. To this aim she introduced the notion of “line of evidence”, which she defines as:

a sequence of empirical results including the records of data collection and all subsequent products of data processing generated on the way to some final empirical constraint. (Boyd 2018: 406)

She thus proposes a conception of evidence that embraces both data and the way in which data are handled, and indeed emphasises the importance of auxiliary information used when assessing data for interpretation, which includes

the metadata regarding the provenance of the data records and the processing workflow that transforms them. (2018: 407)

As she concludes,

together, a line of evidence and its associated metadata compose what I am calling an “enriched line of evidence”. The evidential corpus is then to be made up of many such enriched lines of evidence. (2018: 407)

The relational view thus fosters a functional and contextualist approach to evidence as the manner through which one or more objects are used as warrant for particular knowledge items (which can be propositional claims, but also actions such as specific decisions or modes of conduct/ways of operating). This chimes with the contextual view of evidence defended by Reiss (2015), John Norton’s work on the multiple, tangled lines of inferential reasoning underpinning appeals to induction (2003), and Hasok Chang’s emphasis on the epistemic activities required to ground evidential claims (2012). Building on these ideas and on Stephen Toulmin’s seminal work on research schemas (1958), Alison Wylie has gone one step further in evaluating the inferential scaffolding that researchers (and particularly archaeologists, who so often are called to re-evaluate the same data as evidence for new claims; Wylie 2017) need to make sense of their data, interpret them in ways that are robust to potential challenges, and modify interpretations in the face of new findings. This analysis enabled Wylie to formulate a set of conditions for robust evidential reasoning, which include epistemic security in the chain of evidence, causal anchoring and causal independence of the data used as evidence, as well as the explicit articulation of the grounds for calibration of the instruments and methods involved (Chapman & Wylie 2016; Wylie forthcoming). A similar conclusion is reached by Jessey Wright’s evaluation of the diverse data analysis techniques that neuroscientists use to make sense of functional magnetic resonance imaging of the brain (fMRI scans):

different data analysis techniques reveal different patterns in the data. Through the use of multiple data analysis techniques, researchers can produce results that are locally robust. (Wright 2017: 1179)

Wylie’s and Wright’s analyses exemplify how a relational approach to data fosters a normative understanding of “good evidence” which is anchored in situated judgement—the arguably human prerogative to contextualise and assess the significance of evidential claims. The advantages of this view of evidence are eloquently expressed by Nancy Cartwright’s critique of both philosophical theories and policy approaches that do not recognise the local and contextual nature of evidential reasoning. As she notes,

we need a concept that can give guidance about what is relevant to consider in deciding on the probability of the hypothesis, not one that requires that we already know significant facts about the probability of the hypothesis on various pieces of evidence. (Cartwright 2013: 6)

Thus she argues for a notion of evidence that is not too restrictive, takes account of the difficulties in combining and selecting evidence, and allows for contextual judgement on what types of evidence are best suited to the inquiry at hand (Cartwright 2013, 2019). Reiss’s proposal of a pragmatic theory of evidence similarly

takes scientific practice [..] seriously, both in terms of its greater use of knowledge about the conditions under which science is practised and in terms of its goal to develop insights that are relevant to practising scientists. (Reiss 2015: 361)

A better characterisation of the relation between data and evidence, predicated on the study of how data are processed and aggregated, may go a long way towards addressing these demands. As aptly argued by James Woodward, the evidential relationship between data and claims is not “a purely formal, logical, or a priori matter” (Woodward 2000: S172–173). This again sits uneasily with the expectation that big data analysis may automate scientific discovery and make human judgement redundant.

Let us now return to the idea of data-driven inquiry, often suggested as a counterpoint to hypothesis-driven science (e.g., Hey et al. 2009). Kevin Elliott and colleagues have offered a brief history of hypothesis-driven inquiry (Elliott et al. 2016), emphasising how scientific institutions (including funding programmes and publication venues) have pushed researchers towards a Popperian conceptualisation of inquiry as the formulation and testing of a strong hypothesis. Big data analysis clearly points to a different and arguably Baconian understanding of the role of hypothesis in science. Theoretical expectations are no longer seen as driving the process of inquiry and empirical input is recognised as primary in determining the direction of research and the phenomena—and related hypotheses—considered by researchers.

The emphasis on data as a central component of research poses a significant challenge to one of the best-established philosophical views on scientific knowledge. According to this view, which I shall label the theory-centric view of science, scientific knowledge consists of justified true beliefs about the world. These beliefs are obtained through empirical methods aiming to test the validity and reliability of statements that describe or explain aspects of reality. Hence scientific knowledge is conceptualised as inherently propositional: what count as outputs are claims published in books and journals, which are also typically presented as solutions to hypothesis-driven inquiry. This view acknowledges the significance of methods, data, models, instruments and materials within scientific investigations, but ultimately regards them as means towards one end: the achievement of true claims about the world. Reichenbach’s seminal distinction between contexts of discovery and justification exemplifies this position (Reichenbach 1938). Theory-centrism recognises research components such as data and related practical skills as essential to discovery, and more specifically to the messy, irrational part of scientific work that involves value judgements, trial-and-error, intuition and exploration, and within which the very phenomena to be investigated may not have been stabilised. The justification of claims, by contrast, involves the rational reconstruction of the research that has been performed, so that it conforms to established norms of inferential reasoning. Importantly, within the context of justification, only data that support the claims of interest are explicitly reported and discussed: everything else—including the vast majority of data produced in the course of inquiry—is lost to the chaotic context of discovery. [2]

Much recent philosophy of science, and particularly modelling and experimentation, has challenged theory-centrism by highlighting the role of models, methods and modes of intervention as research outputs rather than simple tools, and stressing the importance of expanding philosophical understandings of scientific knowledge to include these elements alongside propositional claims. The rise of big data offers another opportunity to reframe understandings of scientific knowledge as not necessarily centred on theories and to include non-propositional components—thus, in Cartwright’s paraphrase of Gilbert Ryle’s famous distinction, refocusing on knowing-how over knowing-that (Cartwright 2019). One way to construe data-centric methods is indeed to embrace a conception of knowledge as ability, such as promoted by early pragmatists like John Dewey and more recently reprised by Chang, who specifically highlighted it as the broader category within which the understanding of knowledge-as-information needs to be placed (Chang 2017).

Another way to interpret the rise of big data is as a vindication of inductivism in the face of the barrage of philosophical criticism levelled against theory-free reasoning over the centuries. For instance, Jon Williamson (2004: 88) has argued that advances in automation, combined with the emergence of big data, lend plausibility to inductivist philosophy of science. Wolfgang Pietsch agrees with this view and provided a sophisticated framework to understand just what kind of inductive reasoning is instigated by big data and related machine learning methods such as decision trees (Pietsch 2015). Following John Stuart Mill, he calls this approach variational induction and presents it as common to both big data approaches and exploratory experimentation, though the former can handle a much larger number of variables (Pietsch 2015: 913). Pietsch concludes that the problem of theory-ladenness in machine learning can be addressed by determining under which theoretical assumptions variational induction works (2015: 910ff).

Others are less inclined to see theory-ladenness as a problem that can be mitigated by data-intensive methods, and rather see it as a constitutive part of the process of empirical inquiry. Harking back to the extensive literature on perspectivism and experimentation (Gooding 1990; Giere 2006; Radder 2006; Massimi 2012), Werner Callebaut has forcefully argued that the most sophisticated and standardised measurements embody a specific theoretical perspective, and this is no less true of big data (Callebaut 2012). Elliott and colleagues emphasise that conceptualising big data analysis as atheoretical risks encouraging unsophisticated attitudes to empirical investigation as a

“fishing expedition”, having a high probability of leading to nonsense results or spurious correlations, being reliant on scientists who do not have adequate expertise in data analysis, and yielding data biased by the mode of collection. (Elliott et al. 2016: 880)

To address related worries in genetic analysis, Ken Waters has provided the useful characterisation of “theory-informed” inquiry (Waters 2007), which can be invoked to stress how theory informs the methods used to extract meaningful patterns from big data, and yet does not necessarily determine either the starting point or the outcomes of data-intensive science. This does not resolve the question of what role theory actually plays. Rob Kitchin (2014) has proposed to see big data as linked to a new mode of hypothesis generation within a hypothetico-deductive framework. Leonelli is more sceptical of attempts to match big data approaches, which are many and diverse, with a specific type of inferential logic. She focused instead on the extent to which the theoretical apparatus at work within big data analysis rests on conceptual decisions about how to order and classify data—and proposed that such decisions can give rise to a particular form of theorization, which she calls classificatory theory (Leonelli 2016).

These disagreements point to big data as eliciting diverse understandings of the nature of knowledge and inquiry, and the complex iterations through which different inferential methods build on each other. Again, in the words of Elliott and colleagues,

attempting to draw a sharp distinction between hypothesis-driven and data-intensive science is misleading; these modes of research are not in fact orthogonal and often intertwine in actual scientific practice. (Elliott et al. 2016: 881, see also O’Malley et al. 2009, Elliott 2012)

Another epistemological debate strongly linked to reflection on big data concerns the specific kinds of knowledge emerging from data-centric forms of inquiry, and particularly the relation between predictive and causal knowledge.

Big data science is widely seen as revolutionary in the scale and power of predictions that it can support. Unsurprisingly perhaps, a philosophically sophisticated defence of this position comes from the philosophy of mathematics, where Marco Panza, Domenico Napoletani and Daniele Struppa argued for big data science as occasioning a momentous shift in the predictive knowledge that mathematical analysis can yield, and thus its role within broader processes of knowledge production. The whole point of big data analysis, they posit, is its disregard for causal knowledge:

answers are found through a process of automatic fitting of the data to models that do not carry any structural understanding beyond the actual solution of the problem itself. (Napoletani, Panza, & Struppa 2014: 486)

This view differs from simplistic popular discourse on “the death of theory” (Anderson 2008) and the “power of correlations” (Mayer-Schönberger and Cukier 2013) insofar as it does not side-step the constraints associated with knowledge and generalisations that can be extracted from big data analysis. Napoletani, Panza and Struppa recognise that there are inescapable tensions around the ability of mathematical reasoning to overdetermine empirical input, to the point of providing a justification for any and every possible interpretation of the data. In their words,

the problem arises of how we can gain meaningful understanding of historical phenomena, given the tremendous potential variability of their developmental processes. (Napoletani et al. 2014: 487)

Their solution is to clarify that understanding phenomena is not the goal of predictive reasoning, which is rather a form of agnostic science: “the possibility of forecasting and analysing without a structured and general understanding” (Napoletani et al. 2011: 12). The opacity of algorithmic rationality thus becomes its key virtue and the reason for the extraordinary epistemic success of forecasting grounded on big data. While “the phenomenon may forever remain hidden to our understanding” (ibid.: 5), the application of mathematical models and algorithms to big data can still provide meaningful and reliable answers to well-specified problems—similarly to what has been argued in the case of false models (Wimsatt 2007). Examples include the use of “forcing” methods such as regularisation or diffusion geometry to facilitate the extraction of useful insights from messy datasets.
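By way of illustration only, and not as an account of the methods Napoletani, Panza and Struppa themselves discuss, the following sketch shows regularisation as a "forcing" method in the simplest setting: ridge regression, implemented here with scikit-learn on synthetic, nearly collinear data, yields stable coefficients where ordinary least squares produces erratic ones; all names and values are assumptions made for the example.

```python
# Illustrative sketch of regularisation as a "forcing" method: ridge regression
# shrinks the unstable coefficients that ordinary least squares produces when
# predictors are nearly redundant. Data and settings are arbitrary.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)    # almost a copy of x1 (near-collinear)
X = np.column_stack([x1, x2])
y = 3 * x1 + rng.normal(scale=0.5, size=n)  # only x1 actually matters

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)          # alpha sets the regularisation strength

print("OLS coefficients:  ", ols.coef_.round(2))    # typically large, opposite-signed
print("Ridge coefficients:", ridge.coef_.round(2))  # shrunk towards a stable compromise
```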

This view is at odds with accounts that posit scientific understanding as a key aim of science (de Regt 2017), and the intuition that what researchers are ultimately interested in is

whether the opaque data-model generated by machine-learning technologies count as explanations for the relationships found between input and output. (Boon 2020: 44)

Within the philosophy of biology, for example, it is well recognised that big data facilitates effective extraction of patterns and trends, and that being able to model and predict how an organism or ecosystem may behave in the future is of great importance, particularly within more applied fields such as biomedicine or conservation science. At the same time, researchers are interested in understanding the reasons for observed correlations, and typically use predictive patterns as heuristics to explore, develop and verify causal claims about the structure and functioning of entities and processes. Emanuele Ratti (2015) has argued that big data mining within genome-wide association studies often used in cancer genomics can actually underpin mechanistic reasoning, for instance by supporting eliminative inference to develop mechanistic hypotheses and by helping to explore and evaluate generalisations used to analyse the data. In a similar vein, Pietsch (2016) proposed to use variational induction as a method to establish what counts as causal relationships among big data patterns, by focusing on which analytic strategies allow for reliable prediction and effective manipulation of a phenomenon.

Through the study of data sourcing and processing in epidemiology, Stefano Canali has instead highlighted the difficulties of deriving mechanistic claims from big data analysis, particularly where data are varied and embody incompatible perspectives and methodological approaches (Canali 2016, 2019). Relatedly, the semantic and logistical challenges of organising big data give reason to doubt the reliability of causal claims extracted from such data. In terms of logistics, having a lot of data is not the same as having all of them, and cultivating illusions of comprehensiveness is a risky and potentially misleading strategy, particularly given the challenges encountered in developing and applying curatorial standards for data other than the high-throughput results of “omics” approaches (see also the next section). The constant worry about the partiality and reliability of data is reflected in the care put by database curators in enabling database users to assess such properties; and in the importance given by researchers themselves, particularly in the biological and environmental sciences, to evaluating the quality of data found on the internet (Leonelli 2014, Fleming et al. 2017). In terms of semantics, we are back to the role of data classifications as theoretical scaffolding for big data analysis that we discussed in the previous section. Taxonomic efforts to order and visualise data inform causal reasoning extracted from such data (Sterner & Franz 2017), and can themselves constitute a bottom-up method—grounded in comparative reasoning—for assigning meaning to data models, particularly in situations where a full-blown theory or explanation for the phenomenon under investigation is not available (Sterner 2014).

It is no coincidence that much philosophical work on the relation between causal and predictive knowledge extracted from big data comes from the philosophy of the life sciences, where the absence of axiomatized theories has elicited sophisticated views on the diversity of forms and functions of theory within inferential reasoning. Moreover, biological data are heterogeneous both in their content and in their format; are curated and re-purposed to address the needs of highly disparate and fragmented epistemic communities; and present curators with specific challenges to do with tracking complex, diverse and evolving organismal structures and behaviours, whose relation to an ever-changing environment is hard to pinpoint with any stability (e.g., Shavit & Griesemer 2009). Hence in this domain, some of the core methods and epistemic concerns of experimental research—including exploratory experimentation, sampling and the search for causal mechanisms—remain crucial parts of data-centric inquiry.

At the start of this entry I listed “value” as a major characteristic of big data and pointed to the crucial role of valuing procedures in identifying, processing, modelling and interpreting data as evidence. Identifying and negotiating different forms of data value is an unavoidable part of big data analysis, since these valuation practices determine which data is made available to whom, under which conditions and for which purposes. What researchers choose to consider as reliable data (and data sources) is closely intertwined not only with their research goals and interpretive methods, but also with their approach to data production, packaging, storage and sharing. Thus, researchers need to consider what value their data may have for future research by themselves and others, and how to enhance that value—such as through decisions around which data to make public, how, when and in which format; or, whenever dealing with data already in the public domain (such as personal data on social media), decisions around whether the data should be shared and used at all, and how.

No matter how one conceptualises value practices, it is clear that their key role in data management and analysis prevents facile distinctions between values and “facts” (understood as propositional claims for which data provide evidential warrant). For example, consider a researcher who values both openness—and related practices of widespread data sharing—and scientific rigour—which requires a strict monitoring of the credibility and validity of conditions under which data are interpreted. The scale and manner of big data mobilisation and analysis create tensions between these two values. While the commitment to openness may prompt interest in data sharing, the commitment to rigour may hamper it, since once data are freely circulated online it becomes very difficult to retain control over how they are interpreted, by whom and with which knowledge, skills and tools. How a researcher responds to this conflict affects which data are made available for big data analysis, and under which conditions. Similarly, the extent to which diverse datasets may be triangulated and compared depends on the intellectual property regimes under which the data—and related analytic tools—have been produced. Privately owned data are often unavailable to publicly funded researchers; and many algorithms, cloud systems and computing facilities used in big data analytics are only accessible to those with enough resources to buy relevant access and training. Whatever claims result from big data analysis are, therefore, strongly dependent on social, financial and cultural constraints that condition the data pool and its analysis.

This prominent role of values in shaping data-related epistemic practices is not surprising given existing philosophical critiques of the fact/value distinction (e.g., Douglas 2009), and the existing literature on values in science—such as Helen Longino’s seminal distinction between constitutive and contextual values, as presented in her 1990 book Science as Social Knowledge—may well apply in this case too. Similarly, it is well-established that the technological and social conditions of research strongly condition its design and outcomes. What is particularly worrying in the case of big data is the temptation, prompted by hyped expectations around the power of data analytics, to hide or side-line the valuing choices that underpin the methods, infrastructures and algorithms used for big data extraction.

Consider the use of high-throughput data production tools, which enable researchers to easily generate a large volume of data in formats already geared to computational analysis. Just as in the case of other technologies, researchers have a strong incentive to adopt such tools for data generation; and may do so even in cases where such tools are not good or even appropriate means to pursue the investigation. Ulrich Krohs uses the term convenience experimentation to refer to experimental designs that are adopted not because they are the most appropriate ways of pursuing a given investigation, but because they are easily and widely available and usable, and thus “convenient” means for researchers to pursue their goals (Krohs 2012).

Appeals to convenience can extend to other aspects of data-intensive analysis. Not all data are equally easy to digitally collect, disseminate and link through existing algorithms, which makes some data types and formats more convenient than others for computational analysis. For example, research databases often display the outputs of well-resourced labs within research traditions which deal with “tractable” data formats (such as “omics”). And indeed, the existing distribution of resources, infrastructure and skills determines high levels of inequality in the production, dissemination and use of big data for research. Big players with large financial and technical resources are leading the development and uptake of data analytics tools, leaving much publicly funded research around the world at the receiving end of innovation in this area. Contrary to popular depictions of the data revolution as harbinger of transparency, democracy and social equality, the digital divide between those who can access and use data technologies, and those who cannot, continues to widen. A result of such divides is the scarcity of data relating to certain subgroups and geographical locations, which again limits the comprehensiveness of available data resources.

In the vast ecosystem of big data infrastructures, it is difficult to keep track of such distortions and assess their significance for data interpretation, especially in situations where heterogeneous data sources structured through appeal to different values are mashed together. Thus, the systematic aggregation of convenient datasets and analytic tools over others often results in a big data pool where the relevant sources and forms of bias are impossible to locate and account for (Pasquale 2015; O’Neill 2016; Zuboff 2017; Leonelli 2019a). In such a landscape, arguments for a separation between fact and value—and even a clear distinction between the role of epistemic and non-epistemic values in knowledge production—become very difficult to maintain without discrediting the whole edifice of big data science. Given the extent to which this approach has penetrated research in all domains, it is arguably impossible, however, to critique the value-laden structure of big data science without calling into question the legitimacy of science itself. A more constructive approach is to embrace the extent to which big data science is anchored in human choices, interests and values, and ascertain how this affects philosophical views on knowledge, truth and method.

In closing, it is important to consider at least some of the risks and related ethical questions raised by research with big data. As already mentioned in the previous section, reliance on big data collected by powerful institutions or corporations raises significant social concerns. Contrary to the view that sees big and open data as harbingers of democratic social participation in research, the way that scientific research is governed and financed is not challenged by big data. Rather, the increasing commodification and the large value attributed to certain kinds of data (e.g., personal data) are associated with an increase in inequality of power and visibility between different nations, segments of the population and scientific communities (O’Neill 2016; Zuboff 2017; D’Ignazio and Klein 2020). The digital gap between those who can not only access data but also use it is widening, leading from a state of digital divide to a condition of “data divide” (Bezuidenhout et al. 2017).

Moreover, the privatisation of data has serious implications for the world of research and the knowledge it produces. Firstly, it affects which data are disseminated, and with which expectations. Corporations usually only release data that they regard as having lesser commercial value and that they need public sector assistance to interpret. This introduces another distortion in the sources and types of data that are accessible online, while more expensive and complex data are kept secret. Even many of the ways in which citizens, researchers included, are encouraged to interact with databases and data interpretation sites tend to encourage participation that generates further commercial value. Sociologists have recently described this type of social participation as a form of exploitation (Prainsack & Buyx 2017; Srnicek 2017). In turn, these ways of exploiting data strengthen their economic value over their scientific value. When it comes to the commerce of personal data between companies working in analysis, the value of the data as commercial products (which includes an evaluation of the speed and efficiency with which access to certain data can help develop new products) often takes priority over scientific issues such as, for example, the representativeness and reliability of the data and the ways they were analysed. This can result in decisions that are scientifically problematic, or in a lack of interest in investigating the consequences of the assumptions made and the processes used. This lack of interest easily translates into ignorance of discrimination, inequality and potential errors in the data considered. Such ignorance is highly strategic and economically productive, since it enables the use of data without concern for their social and scientific implications. In this scenario the evaluation of data quality shrinks to an evaluation of their usefulness for the short-term analyses or forecasts required by the client. There are no incentives in this system to encourage evaluation of the long-term implications of data analysis. The risk here is that the commerce of data is accompanied by an increasing divergence between data and their context. The interest in the history of data's movements, the plurality of their emotional or scientific value and the re-evaluation of their origins tends to disappear over time, substituted by the increasing hold of the financial value of data.

The multiplicity of data sources and tools for aggregation also creates risks. The complexity of the data landscape is making it harder to identify which parts of the infrastructure require updating or have been put in doubt by new scientific developments. The situation worsens when considering the number of databases that populate every area of scientific research, each containing assumptions that influence the circulation and interoperability of data and that often are not updated in a reliable and regular way. Just to provide an idea of the numbers involved, the prestigious scientific journal Nucleic Acids Research publishes a special issue every year on new databases relevant to molecular biology: it included 56 new infrastructures in 2015, 62 in 2016, 54 in 2017 and 82 in 2018. These are just a small proportion of the hundreds of databases that are developed each year in the life sciences sector alone. The fact that these databases rely on short-term funding means that a growing percentage of resources remain available to consult online even though they are long dead, a condition that is not always visible to users, who trust databases without checking whether they are actively maintained or not. At what point do these infrastructures become obsolete? What are the risks involved in weaving an ever more extensive tapestry of infrastructures that depend on each other, given the disparity in the ways they are managed and the challenges in identifying and comparing their prerequisite conditions, the theories and the scaffolding used to build them? One of these risks is rampant conservatism: the insistence on recycling old data whose features and management become increasingly murky as time goes by, instead of encouraging the production of new data with features that specifically respond to the requirements and circumstances of their users. In disciplines such as biology and medicine, which study living beings that are by definition continually evolving and developing, such trust in old data is particularly alarming. It is not the case, for example, that data collected on fungi ten, twenty or even a hundred years ago are reliable for explaining the behaviour of the same species of fungi now or in the future (Leonelli 2018).

Researchers of what Luciano Floridi calls the infosphere (the way in which the introduction of digital technologies is changing the world) are becoming aware of the destructive potential of big data, and of the urgent need to focus efforts on managing and using data in active and thoughtful ways that improve the human condition. In Floridi’s own words:

ICT yields great opportunity which, however, entails the enormous intellectual responsibility of understanding this technology to use it in the most appropriate way. (Floridi 2014: vii; see also British Academy & Royal Society 2017)

In light of these findings, it is essential that ethical and social issues are seen as a core part of the technical and scientific requirements associated with data management and analysis. The ethical management of data is not achieved exclusively by regulating the commerce of research and the management of personal data, nor by introducing monitoring of research financing, even though these are important strategies. To guarantee that big data are used in the most scientifically and socially forward-thinking way, it is necessary to transcend the concept of ethics as something external and alien to research. An analysis of the ethical implications of data science should become a basic component of the background and activity of those who take care of data and of the methods used to view and analyse them. Ethical evaluations and choices are hidden in every aspect of data management, including those choices that may seem purely technical.

This entry stressed how the emerging emphasis on big data signals the rise of a data-centric approach to research, in which efforts to mobilise, integrate, disseminate and visualise data are viewed as central contributions to discovery. The emergence of data-centrism highlights the challenges involved in gathering, classifying and interpreting data, and the concepts, technologies and institutions that surround these processes. Tools such as high-throughput measurement instruments and apps for smartphones are fast generating large volumes of data in digital formats. In principle, these data are immediately available for dissemination through internet platforms, which can make them accessible to anybody with a broadband connection in a matter of seconds. In practice, however, access to data is fraught with conceptual, technical, legal and ethical implications; and even when access can be granted, it does not guarantee that the data can be fruitfully used to spur further research. Furthermore, the mathematical and computational tools developed to analyse big data are often opaque in their functioning and assumptions, leading to results whose scientific meaning and credibility may be difficult to assess. This increases the worry that big data science may be grounded upon, and ultimately supporting, the process of making human ingenuity hostage to an alien, artificial and ultimately unintelligible intelligence.

Perhaps the most confronting aspect of big data science as discussed in this entry is the extent to which it deviates from understandings of rationality grounded on individual agency and cognitive abilities (on which much of contemporary philosophy of science is predicated). The power of any one dataset to yield knowledge lies in the extent to which it can be linked with others: this is what lends high epistemic value to digital objects such as GPS locations or sequencing data, and what makes extensive data aggregation from a variety of sources into a highly effective surveillance tool. Data production and dissemination channels such as social media, governmental databases and research repositories operate in a globalised, interlinked and distributed network, whose functioning requires a wide variety of skills and expertise. The distributed nature of decision-making involved in developing big data infrastructures and analytics makes it impossible for any one individual to retain oversight over the quality, scientific significance and potential social impact of the knowledge being produced.

Big data analysis may therefore constitute the ultimate instance of a distributed cognitive system. Where does this leave accountability questions? Many individuals, groups and institutions end up sharing responsibility for the conceptual interpretation and social outcomes of specific data uses. A key challenge for big data governance is to find mechanisms for allocating responsibilities across this complex network, so that erroneous and unwarranted decisions—as well as outright fraudulent, unethical, abusive, discriminatory or misguided actions—can be singled out, corrected and appropriately sanctioned. Thinking about the complex history, processing and use of data can encourage philosophers to avoid ahistorical, uncontextualized approaches to questions of evidence, and instead consider the methods, skills, technologies and practices involved in handling data—and particularly big data—as crucial to understanding empirical knowledge-making.

  • Achinstein, Peter, 2001, The Book of Evidence , Oxford: Oxford University Press. doi:10.1093/0195143892.001.0001
  • Anderson, Chris, 2008, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete”, Wired Magazine , 23 June 2008.
  • Aronova, Elena, Karen S. Baker, and Naomi Oreskes, 2010, “Big science and big data in biology: From the International Geophysical Year through the International Biological Program to the Long Term Ecological Research (LTER) Network, 1957–present”, Historical Studies in the Natural Sciences , 40: 183–224.
  • Aronova, Elena, Christine von Oertzen, and David Sepkoski, 2017, “Introduction: Historicizing Big Data”, Osiris , 32(1): 1–17. doi:10.1086/693399
  • Bauer, Susanne, 2008, “Mining Data, Gathering Variables and Recombining Information: The Flexible Architecture of Epidemiological Studies”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 39(4): 415–428. doi:10.1016/j.shpsc.2008.09.008
  • Bechtel, William, 2016, “Using Computational Models to Discover and Understand Mechanisms”, Studies in History and Philosophy of Science Part A , 56: 113–121. doi:10.1016/j.shpsa.2015.10.004
  • Beisbart, Claus, 2012, “How Can Computer Simulations Produce New Knowledge?”, European Journal for Philosophy of Science , 2(3): 395–434. doi:10.1007/s13194-012-0049-7
  • Bezuidenhout, Louise, Leonelli, Sabina, Kelly, Ann and Rappert, Brian, 2017, “Beyond the Digital Divide: Towards a Situated Approach to Open Data”. Science and Public Policy , 44(4): 464–475. doi: 10.1093/scipol/scw036
  • Bogen, Jim, 2009 [2013], “Theory and Observation in Science”, in The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/spr2013/entries/science-theory-observation/ >.
  • –––, 2010, “Noise in the World”, Philosophy of Science , 77(5): 778–791. doi:10.1086/656006
  • Bogen, James and James Woodward, 1988, “Saving the Phenomena”, The Philosophical Review , 97(3): 303. doi:10.2307/2185445
  • Bokulich, Alisa, 2018, “Using Models to Correct Data: Paleodiversity and the Fossil Record”, in S.I.: Abstraction and Idealization in Scientific Modelling by Synthese , 29 May 2018. doi:10.1007/s11229-018-1820-x
  • Boon, Mieke, 2020, “How Scientists Are Brought Back into Science—The Error of Empiricism”, in A Critical Reflection on Automated Science , Marta Bertolaso and Fabio Sterpetti (eds.), (Human Perspectives in Health Sciences and Technology 1), Cham: Springer International Publishing, 43–65. doi:10.1007/978-3-030-25001-0_4
  • Borgman, Christine L., 2015, Big Data, Little Data, No Data , Cambridge, MA: MIT Press.
  • Boumans, M.J. and Sabina Leonelli, forthcoming, “From Dirty Data to Tidy Facts: Practices of Clustering in Plant Phenomics and Business Cycles”, in Leonelli and Tempini forthcoming.
  • Boyd, Danah and Kate Crawford, 2012, “Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon”, Information, Communication & Society , 15(5): 662–679. doi:10.1080/1369118X.2012.678878
  • Boyd, Nora Mills, 2018, “Evidence Enriched”, Philosophy of Science , 85(3): 403–421. doi:10.1086/697747
  • Bowker, Geoffrey C., 2006, Memory Practices in the Sciences , Cambridge, MA: The MIT Press.
  • Bringsjord, Selmer and Naveen Sundar Govindarajulu, 2018, “Artificial Intelligence”, in The Stanford Encyclopedia of Philosophy (Fall 2018 edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/fall2018/entries/artificial-intelligence/ >.
  • British Academy & Royal Society, 2017, Data Management and Use: Governance in the 21st Century. A Joint Report of the Royal Society and the British Academy , British Academy & Royal Society 2017 available online (see Report).
  • Cai, Li and Yangyong Zhu, 2015, “The Challenges of Data Quality and Data Quality Assessment in the Big Data Era”, Data Science Journal , 14: 2. doi:10.5334/dsj-2015-002
  • Callebaut, Werner, 2012, “Scientific Perspectivism: A Philosopher of Science’s Response to the Challenge of Big Data Biology”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 43(1): 69–80. doi:10.1016/j.shpsc.2011.10.007
  • Calude, Cristian S. and Giuseppe Longo, 2017, “The Deluge of Spurious Correlations in Big Data”, Foundations of Science , 22(3): 595–612. doi:10.1007/s10699-016-9489-4
  • Canali, Stefano, 2016, “Big Data, Epistemology and Causality: Knowledge in and Knowledge out in EXPOsOMICS”, Big Data & Society , 3(2): 205395171666953. doi:10.1177/2053951716669530
  • –––, 2019, “Evaluating Evidential Pluralism in Epidemiology: Mechanistic Evidence in Exposome Research”, History and Philosophy of the Life Sciences , 41(1): art. 4. doi:10.1007/s40656-019-0241-6
  • Cartwright, Nancy D., 2013, Evidence: For Policy and Wheresoever Rigor Is a Must , London School of Economics and Political Science (LSE), Order Project Discussion Paper Series [Cartwright 2013 available online ].
  • –––, 2019, Nature, the Artful Modeler: Lectures on Laws, Science, How Nature Arranges the World and How We Can Arrange It Better (The Paul Carus Lectures) , Chicago, IL: Open Court.
  • Chang, Hasok, 2012, Is Water H2O? Evidence, Realism and Pluralism , (Boston Studies in the Philosophy of Science 293), Dordrecht: Springer Netherlands. doi:10.1007/978-94-007-3932-1
  • –––, 2017, “VI—Operational Coherence as the Source of Truth”, Proceedings of the Aristotelian Society , 117(2): 103–122. doi:10.1093/arisoc/aox004
  • Chapman, Robert and Alison Wylie, 2016, Evidential Reasoning in Archaeology , London: Bloomsbury Publishing Plc.
  • Collins, Harry M., 1990, Artificial Experts: Social Knowledge and Intelligent Machines , Cambridge, MA: MIT Press.
  • Craver, Carl F. and Lindley Darden, 2013, In Search of Mechanisms: Discoveries Across the Life Sciences , Chicago: University of Chicago Press.
  • Daston, Lorraine, 2017, Science in the Archives: Pasts, Presents, Futures , Chicago: University of Chicago Press.
  • De Regt, Henk W., 2017, Understanding Scientific Understanding , Oxford: Oxford University Press. doi:10.1093/oso/9780190652913.001.0001
  • D’Ignazio, Catherine and Klein, Lauren F., 2020, Data Feminism , Cambridge, MA: The MIT Press.
  • Douglas, Heather E., 2009, Science, Policy and the Value-Free Ideal , Pittsburgh, PA: University of Pittsburgh Press.
  • Dreyfus, Hubert L., 1992, What Computers Still Can’t Do: A Critique of Artificial Reason , Cambridge, MA: MIT Press.
  • Durán, Juan M. and Nico Formanek, 2018, “Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism”, Minds and Machines , 28(4): 645–666. doi:10.1007/s11023-018-9481-6
  • Edwards, Paul N., 2010, A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming , Cambridge, MA: The MIT Press.
  • Elliott, Kevin C., 2012, “Epistemic and methodological iteration in scientific research”. Studies in History and Philosophy of Science , 43: 376–382.
  • Elliott, Kevin C., Kendra S. Cheruvelil, Georgina M. Montgomery, and Patricia A. Soranno, 2016, “Conceptions of Good Science in Our Data-Rich World”, BioScience , 66(10): 880–889. doi:10.1093/biosci/biw115
  • Feest, Uljana, 2011, “What Exactly Is Stabilized When Phenomena Are Stabilized?”, Synthese , 182(1): 57–71. doi:10.1007/s11229-009-9616-7
  • Fleming, Lora, Niccolò Tempini, Harriet Gordon-Brown, Gordon L. Nichols, Christophe Sarran, Paolo Vineis, Giovanni Leonardi, Brian Golding, Andy Haines, Anthony Kessel, Virginia Murray, Michael Depledge, and Sabina Leonelli, 2017, “Big Data in Environment and Human Health”, in Oxford Research Encyclopedia of Environmental Science , Oxford: Oxford University Press. doi:10.1093/acrefore/9780199389414.013.541
  • Floridi, Luciano, 2014, The Fourth Revolution: How the Infosphere is Reshaping Human Reality , Oxford: Oxford University Press.
  • Floridi, Luciano and Phyllis Illari (eds.), 2014, The Philosophy of Information Quality , (Synthese Library 358), Cham: Springer International Publishing. doi:10.1007/978-3-319-07121-3
  • Frigg, Roman and Julian Reiss, 2009, “The Philosophy of Simulation: Hot New Issues or Same Old Stew?”, Synthese , 169(3): 593–613. doi:10.1007/s11229-008-9438-z
  • Frigg, Roman and Stephan Hartmann, 2016, “Models in Science”, in The Stanford Encyclopedia of Philosophy (Winter 2016 edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/win2016/entries/models-science/ >.
  • Gooding, David C., 1990, Experiment and the Making of Meaning , Dordrecht & Boston: Kluwer.
  • Giere, Ronald, 2006, Scientific Perspectivism , Chicago: University of Chicago Press.
  • Griesemer, James R., forthcoming, “A Data Journey through Dataset-Centric Population Biology”, in Leonelli and Tempini forthcoming.
  • Hacking, Ian, 1992, “The Self-Vindication of the Laboratory Sciences”, In Science as Practice and Culture , Andrew Pickering (ed.), Chicago, IL: The University of Chicago Press, 29–64.
  • Harris, Todd, 2003, “Data Models and the Acquisition and Manipulation of Data”, Philosophy of Science , 70(5): 1508–1517. doi:10.1086/377426
  • Hey Tony, Stewart Tansley, and Kristin Tolle, 2009, The Fourth Paradigm. Data-Intensive Scientific Discovery , Redmond, WA: Microsoft Research.
  • Humphreys, Paul, 2004, Extending Ourselves: Computational Science, Empiricism, and Scientific Method , Oxford: Oxford University Press. doi:10.1093/0195158709.001.0001
  • –––, 2009, “The Philosophical Novelty of Computer Simulation Methods”, Synthese , 169(3): 615–626. doi:10.1007/s11229-008-9435-2
  • Karaca, Koray, 2018, “Lessons from the Large Hadron Collider for Model-Based Experimentation: The Concept of a Model of Data Acquisition and the Scope of the Hierarchy of Models”, Synthese , 195(12): 5431–5452. doi:10.1007/s11229-017-1453-5
  • Kelly, Thomas, 2016, “Evidence”, in The Stanford Encyclopedia of Philosophy (Winter 2016 edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/win2016/entries/evidence/ >.
  • Kitchin, Rob, 2013, The Data Revolution: Big Data, Open Data, Data Infrastructures & Their Consequences , Los Angeles: Sage.
  • –––, 2014, “Big Data, new epistemologies and paradigm shifts”, Big Data and Society , 1(1) April-June. doi: 10.1177/2053951714528481
  • Kitchin, Rob and Gavin McArdle, 2016, “What Makes Big Data, Big Data? Exploring the Ontological Characteristics of 26 Datasets”, Big Data & Society , 3(1): 205395171663113. doi:10.1177/2053951716631130
  • Krohs, Ulrich, 2012, “Convenience Experimentation”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 43(1): 52–57. doi:10.1016/j.shpsc.2011.10.005
  • Lagoze, Carl, 2014, “Big Data, data integrity, and the fracturing of the control zone,” Big Data and Society , 1(2) July-December. doi: 10.1177/2053951714558281
  • Leonelli, Sabina, 2014, “What Difference Does Quantity Make? On the Epistemology of Big Data in Biology”, Big Data & Society , 1(1): 205395171453439. doi:10.1177/2053951714534395
  • –––, 2016, Data-Centric Biology: A Philosophical Study , Chicago: University of Chicago Press.
  • –––, 2017, “Global Data Quality Assessment and the Situated Nature of ‘Best’ Research Practices in Biology”, Data Science Journal , 16: 32. doi:10.5334/dsj-2017-032
  • –––, 2018, “The Time of Data: Timescales of Data Use in the Life Sciences”, Philosophy of Science , 85(5): 741–754. doi:10.1086/699699
  • –––, 2019a, La Recherche Scientifique à l’Ère des Big Data: Cinq Façons dont les Données Massives Nuisent à la Science, et Comment la Sauver , Milano: Éditions Mimésis.
  • –––, 2019b, “What Distinguishes Data from Models?”, European Journal for Philosophy of Science , 9(2): 22. doi:10.1007/s13194-018-0246-0
  • Leonelli, Sabina and Niccolò Tempini, 2018, “Where Health and Environment Meet: The Use of Invariant Parameters in Big Data Analysis”, Synthese , special issue on the Philosophy of Epidemiology , Sean Valles and Jonathan Kaplan (eds.). doi:10.1007/s11229-018-1844-2
  • –––, forthcoming, Data Journeys in the Sciences , Cham: Springer International Publishing.
  • Loettgers, Andrea, 2009, “Synthetic Biology and the Emergence of a Dual Meaning of Noise”, Biological Theory , 4(4): 340–356. doi:10.1162/BIOT_a_00009
  • Longino, Helen E., 1990, Science as Social Knowledge: Values and Objectivity in Scientific Inquiry , Princeton, NJ: Princeton University Press.
  • Lowrie, Ian, 2017, “Algorithmic Rationality: Epistemology and Efficiency in the Data Sciences”, Big Data & Society , 4(1): 1–13. doi:10.1177/2053951717700925
  • MacLeod, Miles and Nancy J. Nersessian, 2013, “Building Simulations from the Ground Up: Modeling and Theory in Systems Biology”, Philosophy of Science , 80(4): 533–556. doi:10.1086/673209
  • Massimi, Michela, 2011, “From Data to Phenomena: A Kantian Stance”, Synthese , 182(1): 101–116. doi:10.1007/s11229-009-9611-z
  • –––, 2012, “ Scientific perspectivism and its foes”, Philosophica , 84: 25–52.
  • –––, 2016, “Three Tales of Scientific Success”, Philosophy of Science , 83(5): 757–767. doi:10.1086/687861
  • Mayer-Schönberger, Victor and Kenneth Cukier, 2013, Big Data: A Revolution that Will Transform How We Live, Work, and Think , New York: Eamon Dolan/Houghton Mifflin Harcourt.
  • Mayo, Deborah G., 1996, Error and the Growth of Experimental Knowledge , Chicago: University of Chicago Press.
  • Mayo, Deborah G. and Aris Spanos (eds.), 2009a, Error and Inference , Cambridge: Cambridge University Press.
  • Mayo, Deborah G. and Aris Spanos, 2009b, “Introduction and Background”, in Mayo and Spanos (eds.) 2009a, pp. 1–27.
  • McAllister, James W., 1997, “Phenomena and Patterns in Data Sets”, Erkenntnis , 47(2): 217–228. doi:10.1023/A:1005387021520
  • –––, 2007, “Model Selection and the Multiplicity of Patterns in Empirical Data”, Philosophy of Science , 74(5): 884–894. doi:10.1086/525630
  • –––, 2011, “What Do Patterns in Empirical Data Tell Us about the Structure of the World?”, Synthese , 182(1): 73–87. doi:10.1007/s11229-009-9613-x
  • McQuillan, Dan, 2018, “Data Science as Machinic Neoplatonism”, Philosophy & Technology , 31(2): 253–272. doi:10.1007/s13347-017-0273-3
  • Mitchell, Sandra D., 2003, Biological Complexity and Integrative Pluralism , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511802683
  • Morgan, Mary S., 2005, “Experiments versus Models: New Phenomena, Inference and Surprise”, Journal of Economic Methodology , 12(2): 317–329. doi:10.1080/13501780500086313
  • –––, forthcoming, “The Datum in Context”, in Leonelli and Tempini forthcoming.
  • Morrison, Margaret, 2015, Reconstructing Reality: Models, Mathematics, and Simulations , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199380275.001.0001
  • Müller-Wille, Staffan and Isabelle Charmantier, 2012, “Natural History and Information Overload: The Case of Linnaeus”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 43(1): 4–15. doi:10.1016/j.shpsc.2011.10.021
  • Napoletani, Domenico, Marco Panza, and Daniele C. Struppa, 2011, “Agnostic Science. Towards a Philosophy of Data Analysis”, Foundations of Science , 16(1): 1–20. doi:10.1007/s10699-010-9186-7
  • –––, 2014, “Is Big Data Enough? A Reflection on the Changing Role of Mathematics in Applications”, Notices of the American Mathematical Society , 61(5): 485–490. doi:10.1090/noti1102
  • Nickles, Thomas, forthcoming, “Alien Reasoning: Is a Major Change in Scientific Research Underway?”, Topoi , first online: 20 March 2018. doi:10.1007/s11245-018-9557-1
  • Norton, John D., 2003, “A Material Theory of Induction”, Philosophy of Science , 70(4): 647–670. doi:10.1086/378858
  • O’Malley M, Maureen A., Kevin C. Elliott, Chris Haufe, and Richard Burian, 2009. “Philosophies of funding”. Cell , 138: 611–615. doi: 10.1016/j.cell.2009.08.008
  • O’Malley, Maureen A. and Orkun S. Soyer, 2012, “The Roles of Integration in Molecular Systems Biology”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 43(1): 58–68. doi:10.1016/j.shpsc.2011.10.006
  • O’Neill, Cathy, 2016, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy , New York: Crown.
  • Parker, Wendy S., 2009, “Does Matter Really Matter? Computer Simulations, Experiments, and Materiality”, Synthese , 169(3): 483–496. doi:10.1007/s11229-008-9434-3
  • –––, 2017, “Computer Simulation, Measurement, and Data Assimilation”, The British Journal for the Philosophy of Science , 68(1): 273–304. doi:10.1093/bjps/axv037
  • Pasquale, Frank, 2015, The Black Box Society: The Secret Algorithms That Control Money and Information , Cambridge, MA: Harvard University Press.
  • Pietsch, Wolfgang, 2015, “Aspects of Theory-Ladenness in Data-Intensive Science”, Philosophy of Science , 82(5): 905–916. doi:10.1086/683328
  • –––, 2016, “The Causal Nature of Modeling with Big Data”, Philosophy & Technology , 29(2): 137–171. doi:10.1007/s13347-015-0202-2
  • –––, 2017, “Causation, probability and all that: Data science as a novel inductive paradigm”, in Frontiers in Data Science , Matthias Dehmer and Frank Emmert-Streib (eds.), Boca Raton, FL: CRC, 329–353.
  • Porter, Theodore M., 1995, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life , Princeton, NJ: Princeton University Press.
  • Porter, Theodore M. and Soraya de Chadarevian, 2018, “Introduction: Scrutinizing the Data World”, Historical Studies in the Natural Sciences , 48(5): 549–556. doi:10.1525/hsns.2018.48.5.549
  • Prainsack, Barbara and Buyx, Alena, 2017, Solidarity in Biomedicine and Beyond , Cambridge, UK: Cambridge University Press.
  • Radder, Hans, 2009, “The Philosophy of Scientific Experimentation: A Review”, Automated Experimentation , 1(1): 2. doi:10.1186/1759-4499-1-2
  • Ratti, Emanuele, 2015, “Big Data Biology: Between Eliminative Inferences and Exploratory Experiments”, Philosophy of Science , 82(2): 198–218. doi:10.1086/680332
  • Reichenbach, Hans, 1938, Experience and Prediction: An Analysis of the Foundations and the Structure of Knowledge , Chicago, IL: The University of Chicago Press.
  • Reiss, Julian, 2015, “A Pragmatist Theory of Evidence”, Philosophy of Science , 82(3): 341–362. doi:10.1086/681643
  • Reiss, Julian, 2015, Causation, Evidence, and Inference , New York: Routledge.
  • Rescher, Nicholas, 1984, The Limits of Science , Berkely, CA: University of California Press.
  • Rheinberger, Hans-Jörg, 2011, “Infra-Experimentality: From Traces to Data, from Data to Patterning Facts”, History of Science , 49(3): 337–348. doi:10.1177/007327531104900306
  • Romeijn, Jan-Willem, 2017, “Philosophy of Statistics”, in The Stanford Encyclopedia of Philosophy (Spring 2017), Edward N. Zalta (ed.), URL: https://plato.stanford.edu/archives/spr2017/entries/statistics/ .
  • Sepkoski, David, 2013, “Toward ‘a natural history of data’: Evolving practices and epistemologies of data in paleontology, 1800–2000”, Journal of the History of Biology , 46: 401–444.
  • Shavit, Ayelet and James Griesemer, 2009, “There and Back Again, or the Problem of Locality in Biodiversity Surveys*”, Philosophy of Science , 76(3): 273–294. doi:10.1086/649805
  • Srnicek, Nick, 2017, Platform capitalism , Cambridge, UK and Malden, MA: Polity Press.
  • Sterner, Beckett, 2014, “The Practical Value of Biological Information for Research”, Philosophy of Science , 81(2): 175–194. doi:10.1086/675679
  • Sterner, Beckett and Nico M. Franz, 2017, “Taxonomy for Humans or Computers? Cognitive Pragmatics for Big Data”, Biological Theory , 12(2): 99–111. doi:10.1007/s13752-017-0259-5
  • Sterner, Beckett W., Nico M. Franz, and J. Witteveen, 2020, “Coordinating dissent as an alternative to consensus classification: insights from systematics for bio-ontologies”, History and Philosophy of the Life Sciences , 42(1): 8. doi: 10.1007/s40656-020-0300-z
  • Stevens, Hallam, 2016, “Hadooping the Genome: The Impact of Big Data Tools on Biology”, BioSocieties , 11: 352–371.
  • Strasser, Bruno, 2019, Collecting Experiments: Making Big Data Biology , Chicago: University of Chicago Press.
  • Suppes, Patrick, 1962, “Models of data”, in Logic, Methodology and Philosophy of Science , Ernest Nagel, Patrick Suppes, & Alfred Tarski (eds.), Stanford: Stanford University Press, 252–261.
  • Symons, John and Ramón Alvarado, 2016, “Can We Trust Big Data? Applying Philosophy of Science to Software”, Big Data & Society , 3(2): 1-17. doi:10.1177/2053951716664747
  • Symons, John and Jack Horner, 2014, “Software Intensive Science”, Philosophy & Technology , 27(3): 461–477. doi:10.1007/s13347-014-0163-x
  • Tempini, Niccolò, 2017, “Till Data Do Us Part: Understanding Data-Based Value Creation in Data-Intensive Infrastructures”, Information and Organization , 27(4): 191–210. doi:10.1016/j.infoandorg.2017.08.001
  • Tempini, Niccolò and Sabina Leonelli, 2018, “Concealment and Discovery: The Role of Information Security in Biomedical Data Re-Use”, Social Studies of Science , 48(5): 663–690. doi:10.1177/0306312718804875
  • Toulmin, Stephen, 1958, The Uses of Arguments , Cambridge: Cambridge University Press.
  • Turner, Raymond and Nicola Angius, 2019, “The Philosophy of Computer Science”, in The Stanford Encyclopedia of Philosophy (Spring 2019 edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/spr2019/entries/computer-science/ >.
  • Van Fraassen, Bas C., 2008, Scientific Representation: Paradoxes of Perspective , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199278220.001.0001
  • Waters, C. Kenneth, 2007, “The Nature and Context of Exploratory Experimentation: An Introduction to Three Case Studies of Exploratory Research”, History and Philosophy of the Life Sciences , 29(3): 275–284.
  • Wilkinson, Mark D., Michel Dumontier, IJsbrand Jan Aalbersberg, Gabrielle Appleton, Myles Axton, Arie Baak, Niklas Blomberg, Jan-Willem Boiten, Luiz Bonino da Silva Santos, Philip E. Bourne, Jildau Bouwman, Anthony J. Brookes, Tim Clark, Mercè Crosas, Ingrid Dillo, Olivier Dumon, Scott Edmunds, Chris T. Evelo, Richard Finkers, Alejandra Gonzalez-Beltran, et al., 2016, “The FAIR Guiding Principles for Scientific Data Management and Stewardship”, Scientific Data , 3(1): 160018. doi:10.1038/sdata.2016.18
  • Williamson, Jon, 2004, “A dynamic interaction between machine learning and the philosophy of science”, Minds and Machines , 14(4): 539–54.
  • Wimsatt, William C., 2007, Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality , Cambridge, MA: Harvard University Press.
  • Winsberg, Eric, 2010, Science in the Age of Computer Simulation , Chicago: University of Chicago Press.
  • Woodward, James, 2000, “Data, phenomena and reliability”, Philosophy of Science , 67(supplement): Proceedings of the 1998 Biennial Meetings of the Philosophy of Science Association. Part II: Symposia Papers (Sep., 2000), pp. S163–S179. https://www.jstor.org/stable/188666
  • –––, 2010, “Data, Phenomena, Signal, and Noise”, Philosophy of Science , 77(5): 792–803. doi:10.1086/656554
  • Wright, Jessey, 2017, “The Analysis of Data and the Evidential Scope of Neuroimaging Results”, The British Journal for the Philosophy of Science , 69(4): 1179–1203. doi:10.1093/bjps/axx012
  • Wylie, Alison, 2017, “How Archaeological Evidence Bites Back: Strategies for Putting Old Data to Work in New Ways”, Science, Technology, & Human Values , 42(2): 203–225. doi:10.1177/0162243916671200
  • –––, forthcoming, “Radiocarbon Dating in Archaeology: Triangulation and Traceability”, in Leonelli and Tempini forthcoming.
  • Zuboff, Shoshana, 2017, The Age of Surveillance Capitalism: The Fight for the Future at the New Frontier of Power , New York: Public Affairs.


Acknowledgments

The research underpinning this entry was funded by the European Research Council (grant award 335925) and the Alan Turing Institute (EPSRC Grant EP/N510129/1).

Copyright © 2020 by Sabina Leonelli <s.leonelli@exeter.ac.uk>



Open Access

Peer-reviewed

Research Article

What is your definition of Big Data? Researchers’ understanding of the phenomenon of the decade


* E-mail: [email protected]

Affiliation Institute for Biomedical Ethics, University of Basel, Basel, Switzerland



  • Maddalena Favaretto, 
  • Eva De Clercq, 
  • Christophe Olivier Schneble, 
  • Bernice Simone Elger


  • Published: February 25, 2020
  • https://doi.org/10.1371/journal.pone.0228987


The term Big Data is commonly used to describe a range of different concepts: from the collection and aggregation of vast amounts of data, to a plethora of advanced digital techniques designed to reveal patterns related to human behavior. In spite of its widespread use, the term is still loaded with conceptual vagueness. The aim of this study is to examine the understanding of the meaning of Big Data from the perspectives of researchers in the fields of psychology and sociology, in order to examine whether researchers consider currently existing definitions to be adequate and to investigate whether a standard discipline-centric definition is possible.

Thirty-nine interviews were performed with Swiss and American researchers involved in Big Data research in relevant fields. The interviews were analyzed using thematic coding.

No univocal definition of Big Data was found among the respondents and many participants admitted uncertainty towards giving a definition of Big Data. A few participants described Big Data with the traditional “Vs” definition—although they could not agree on the number of Vs. However, most of the researchers preferred a more practical definition, linking it to processes such as data collection and data processing.

The study identified an overall uncertainty or uneasiness among researchers towards the use of the term Big Data, which might derive from the tendency to recognize Big Data as a shifting and evolving cultural phenomenon. Moreover, the current use of the term as a hyped-up buzzword might further aggravate the conceptual vagueness of Big Data.

Citation: Favaretto M, De Clercq E, Schneble CO, Elger BS (2020) What is your definition of Big Data? Researchers’ understanding of the phenomenon of the decade. PLoS ONE 15(2): e0228987. https://doi.org/10.1371/journal.pone.0228987

Editor: Florian Fischer, Bielefeld University, GERMANY

Received: October 10, 2019; Accepted: January 16, 2020; Published: February 25, 2020

Copyright: © 2020 Favaretto et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The raw data (audio-files) and the transcripts related to the project cannot be openly released due to ethical constraints (such as easy re-identification of the participants and the sensitive nature of parts of the interviews). The data will be kept for three years after the finalization of the study (in accordance with the ethics application). The data is stored on a safe server to which only the team members of the NRP 75 Project Elger have access. The university servers are backed up regularly by the IT service of the University of Basel to ensure quality preservation of the data for the stipulated three years after the finalization of the study. Although the authors cannot make their study’s data publicly available at the time of publication, all authors commit to make the data underlying the findings described in this study fully available without restriction to those who request the data, in compliance with the PLOS Data Availability policy. For data sets involving personally identifiable information or other sensitive data, data sharing is contingent on the data being handled appropriately by the data requester and in accordance with all applicable local requirements. We therefore commit to provide sections of our data which form the basis of analysis in this particular manuscript whenever requested. Upon request, a data sharing agreement will be stipulated between the Institute for Biomedical Ethics and the party requesting the data, stating that: 1) The shared data must be deleted by the end of 2023, as stipulated in the recruitment email sent to the study participants and designed in accordance with the project proposal of the NRP 75 sent to the Ethics Committee northwest/central Switzerland (EKNZ); 2) The people requesting the data agree to ensure its confidentiality, they should not attempt to re-identify the participants, and the data should not be shared with any further third stakeholder not involved in the data sharing agreement signed between the Institute for Biomedical Ethics and those requesting the data; 3) The data will be shared only after the Institute for Biomedical Ethics has received specific written consent for data sharing from the study participants. The authors are the point of contact for fielding data access requests as they are the ones that collected the data and that have access to the safe server where the data is stored. The main points of contact for this manuscript are the Head of the Institute for Biomedical Ethics (Bernice Elger: [email protected] ) and the corresponding author (Maddalena Favaretto: [email protected] ).

Funding: The funding for this study was provided by the Swiss National Science Foundation in the framework of the National Research Program “Big Data”, NRP 75 (Grant-No: 407540_167211, recipient: Prof. Bernice Simone Elger). We confirm that the Swiss National Science Foundation had no involvement in the study design, collection, analysis, and interpretation of data, the writing of the manuscript and the decision to submit the paper for publication.

Competing interests: The authors declare that they have no competing interests.

Abbreviations: CH, Switzerland; EC, Cantonal Review Board; EHR, electronic health record; EU, European Union; HRA, Human Research Act; IRB, Institutional Review Board; NIH, National Institutes of Health; NSF, National Science Foundation; US, United States

Introduction

“Big Data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it …” @Dan Ariely, 2013

Big Data is a term that has invaded our daily world. From commercial applications to research in multiple fields, Big Data holds the promise of solving some of the world’s most challenging problems. Also within academia, Big Data is popular in most disciplines, from the social sciences [ 1 ] to psychology [ 2 ], geography [ 3 ], the humanities (now also called the digital humanities [ 4 ]), and healthcare [ 5 ].

The possibility of using increasingly big datasets that have the potential to reveal patterns of individual and group behavior, together with the promising beneficial applications of data analytics [ 6 ], has attracted many researchers. Examples include the development of smarter hospitals, where predictive analysis of Electronic Health Records (EHR) can identify in real time patients at higher risk of health deterioration or cardiac arrest [ 7 ], and the design of smart city projects that use aggregated data from social media, GPS, radio frequencies and consumer data to improve various sectors of urban living such as transportation, education and energy [ 8 ].

Hence, Big Data has become a frequently utilized term in the academic environment as a novel and sophisticated apparatus for research. But this raises the important question: what exactly is meant by “Big Data”?

This study aims to explore how researchers working with state of the art digital research projects in psychology and social sciences understand the term Big Data, in order to a) explore the main characteristics that researchers attribute to Big Data; b) examine whether researchers consider currently existing definitions of Big Data to be adequate; c) investigate if an overarching and straightforward discipline-centric definition of Big Data in psychological and sociological research is actually possible and desirable.

The term Big Data is not a recent one. Although Diebold admits that it “probably originated in the lunch-table conversations at Silicon Graphics in the mid-1990s” [ 9 ], its first appearance in the academic literature dates back to the early 2000s, in statistics and econometrics, where Big Data was used to describe “the explosion in the quantity (and sometimes, quality) of available and potentially relevant data, largely the result of recent and unprecedented advancements in data recording and storage technology” [ 10 ]. Attributed characteristics of Big Data were: volume (huge amounts), velocity (high-speed processing) and variety (heterogeneous data), the so-called 3Vs of Big Data [ 11 ].

In the following years, as larger quantities of data became readily available, additional definitions of Big Data were developed that expanded on the traditional three attributes [ 12 ]: from additional Vs such as veracity [ 13 ], value [ 14 ] and variability [ 15 ] to other qualities including exhaustivity [ 16 ], extensionality [ 17 ], and complexity [ 18 ].

Despite their differences, these definitions all highlight that Big Data consists of large amounts of data coming from different sources. The European Commission defines Big Data as:

large amounts of different types of data produced from various types of sources, such as people, machines or sensors. This data includes climate information, satellite imagery, digital pictures and videos, transition records or GPS signals. Big Data may involve personal data: that is, any information relating to an individual, and can be anything from a name, a photo, an email address, bank details, posts on social networking websites, medical information, or a computer IP address [ 19 ].

Similarly, in the United States, the National Science Foundation (NSF) refers to Big Data as:

large, diverse, complex, longitudinal, and/or distributed data sets generated from instruments, sensors, Internet transactions, email, video, click streams, and/or all other digital sources available today and in the future (NSF-12-499) [ 20 ],
data that challenge existing methods due to size, complexity, or rate of availability (NSF-14-543) [ 21 ].

Despite the consensual focal point of these definitions, Big Data continues to be surrounded with conceptual vagueness due to the heterogeneous ways in which the term is used in various contexts [ 22 ]. To solve this issue, scholars have tried to propose a standard or mutually agreed upon definition of Big Data. For example, De Mauro and colleagues proposed a consensual formal definition where Big Data “represents the Information assets characterized by such a High Volume, Velocity and Variety to require specific Technology and Analytical Methods for its transformation into Value” [ 22 ]. In the biomedical context, Baro et al. [ 23 ] define it exclusively by its volume and propose a threshold over which a dataset qualifies as Big Data.

Other scholars, like Floridi for example, have criticized these traditional “attributes” definitions because they are vague and obscure and do not clarify what the term Big Data exactly means or refers to [ 24 ]. Some scholars within the social sciences have suggested discarding the “V features” definitions altogether, as these attributes predominantly come from data science and data analytics and are considered too technical. Among them, one proposal replaces them with 13 “P features” such as portentous , perverse , personal , political , predictive , etc. [ 25 ]. Kitchin and McArdle argue that V-words and P-words “are often descriptive of a broad set of issues associated with Big Data, rather than characterizing the ontological traits of data themselves” [ 26 ]. The authors also claim that volume and variety are not key characteristics of Big Data—only velocity and exhaustivity are—and that the V definition is somewhat false and misleading, as there are multiple forms of Big Data that do not share all the same characteristics. Moreover, it has also been argued that, as the computational capacities of systems increase exponentially with time, it would be “impractical to define a specific threshold for Big Data volumes, because they are relative and they vary by factors, such as time and the type of data” [ 27 ], leaving the threshold to be a non-definitive and suggestive measure that is not suitable for a coherent definition.
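The relativity of any fixed volume threshold can be made concrete with a small, purely illustrative sketch: the helper function, the 500 GB dataset and the capacity figures below are invented for illustration and are not taken from Baro et al. or from Kitchin and McArdle.

```python
# Purely illustrative sketch: the function, the dataset size and the
# capacity figures are invented, not drawn from the cited papers.
def is_big_data(dataset_gb: float, typical_capacity_gb: float) -> bool:
    """Call a dataset 'big' when it exceeds what a typical machine of the
    period can comfortably store and process."""
    return dataset_gb > typical_capacity_gb

dataset_gb = 500  # the same hypothetical dataset, unchanged over the years

# Rough, invented figures for typical workstation capacity in gigabytes.
for year, capacity_gb in [(2005, 80), (2012, 1000), (2019, 4000)]:
    print(year, "Big Data?", is_big_data(dataset_gb, capacity_gb))

# The very same 500 GB dataset counts as "Big Data" in 2005 but not afterwards,
# which is why a fixed volume threshold is considered impractical.
```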

So despite scholarly effort to narrow down the debate on the definition of Big Data and despite the existence of definitions employed by policymaking and academic bodies, such as the aforementioned definitions from the European Commission and the NSF, there is still no consensus in the literature on a proper definition of Big Data. Moreover, it is unclear to what extent academic researchers working in disciplines that embrace Big Data as a research methodology are aware of and agree with these existing definitions.

The definition of Big Data is an important topic given that Institutional Review Boards (IRBs) and regulatory bodies worldwide are struggling to regulate Big Data research and research projects involving Big Data methods and analytics. The use of growing amounts of personal data and the lack of appropriate guidelines and laws in fact raise important ethical issues [ 28 , 29 ]. In psychology and sociology, privacy concerns are particularly pressing. For instance, the literature has highlighted the issues of linking different digital datasets, which on the one hand might lead to valuable research insights but on the other might reveal sensitive information about research participants [ 30 ]; some scholars have underlined the intrinsic tension between ensuring the anonymity of research participants and the quality of the data set, especially in light of increasingly applied policies for open data sources in academic research [ 31 ]; others have questioned the acceptability of using data from digital spaces (for instance social media) for research purposes without the subjects’ explicit consent or awareness [ 32 ]. Scandals such as Cambridge Analytica [ 33 ] and the Facebook Emotional Contagion Experiment [ 34 ] have put under the spotlight how poorly regulated research practices might jeopardize public perception of research. The public outrage that followed such scandals has led to the development of strategies to protect both private users and research participants, in both industry and academic contexts [ 35 ]. However, researchers still point to the lack of support from regulatory bodies when it comes to evaluating increasingly computational research proposals [ 36 , 37 ].

As long as definitions are unclear, the laws, regulations and guidelines that are meant to govern Big Data research in these two fields are unlikely to be effective, especially if researchers are unaware of the regulatory framework or refrain from defining their research as Big Data research out of fear of regulatory restrictions, as happened with the buzzword “nano” when referring to nanotechnology [ 38 ].

Furthermore, we should not forget that the growing datafication and digitalization of society requires researchers to work together in multidisciplinary teams in order to address the technical, ethical and legal challenges that Big Data research poses [ 39 ]. As communication challenges might arise in collective networks and among different stakeholders if each has their own definition or understanding of the discussed technology, as has happened in other scientific fields [ 38 ], the lack of a shared definition of Big Data might aggravate multidisciplinary communication. For instance, a researcher in the social sciences who does not recognize that they are working with Big Data, because they have a particular definition in mind, might be less likely to promptly and spontaneously approach expert researchers in the field of data protection and data ethics to plan improved strategies for the protection of research subjects that are in line with the standards demanded by the specific privacy issues embedded in Big Data research.

For this purpose, we conducted interviews with researchers from high-standing universities in both Switzerland and the United States. The present study offers an important contribution to the existing literature since it is one of the first studies to examine the opinions of academic researchers on the definition of Big Data in the fields of sociology and psychology.

The data for this manuscript was collected as part of a larger research project on the ethics of Big Data research. The aim of the overall project was to investigate the ethical and regulatory challenges of Big Data academic research in the fields of psychology and sociology in Switzerland. These two disciplines were selected not only because they are at the forefront of using Big Data methodologies in projects that involve human research subjects both directly and indirectly [ 40 ] but also because they are among the most under-regulated research fields [ 28 , 34 ]. This is especially true for Switzerland, the home country of the project, where Big Data research is challenging the current regulatory framework for academic research projects, such as the Federal Act on Data Protection [ 41 ] and the Human Research Act [ 42 ].

We conducted 39 semi-structured interviews – 20 in Switzerland (CH) and 19 in the United States (US) – with researchers (professors, senior researchers, or postdocs) involved in research projects using Big Data methodologies in the field of psychology and sociology.

The United States was chosen as a comparative sample country where advanced Big Data research is taking place in the academic context. This choice is supported by the numerous grants that federal institutions such as the NSF and the National Institutes of Health (NIH) have been awarding for Big Data research projects for several years [ 20 , 21 , 43 ].

Participants were selected based on their involvement in Big Data research. For this purpose, we compiled a list of keywords linked to Big Data. The list was compiled by two of the authors while performing a systematic review on Big Data that assisted the identification of the main terms related to Big Data research and technology [ 44 ]. The first author then systematically browsed the professional pages of all professors affiliated to the departments of psychology and sociology of all twelve Swiss Universities (ten Universities and two Federal Institutes of Technology) and the top ten US Universities according to the Times Higher Education University Ranking 2018 (accessed on 13.12.2018) and selected those that had these specific keywords appearing in their personal page (See Table 1 ):

[Table 1 (keywords used to identify researchers involved in Big Data research): https://doi.org/10.1371/journal.pone.0228987.t001]
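Purely as an illustration of this keyword screening step, the sketch below mimics matching selection terms against profile text; the keyword list and profile snippets are invented placeholders, and the actual selection was performed manually against the terms listed in Table 1.

```python
# Hedged illustration only: keywords and profile snippets are hypothetical;
# the study's screening was done by hand against the Table 1 terms.
BIG_DATA_KEYWORDS = {
    "big data", "data mining", "machine learning",
    "social media data", "digital trace data",
}

def matches_keywords(profile_text: str, keywords=BIG_DATA_KEYWORDS) -> bool:
    """Return True if any selection keyword appears in a researcher's page."""
    text = profile_text.lower()
    return any(keyword in text for keyword in keywords)

# Hypothetical snippets from professional pages.
profiles = {
    "Researcher A": "Our lab studies adolescent wellbeing using social media data.",
    "Researcher B": "I run qualitative interview studies on migration.",
}

selected = [name for name, page in profiles.items() if matches_keywords(page)]
print(selected)  # ['Researcher A']
```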

For Switzerland the selection was carried out throughout January/February 2018 and for the US during January/February 2019. Other participants were identified through snowballing. Both the systematic selection and the snowballing identified a considerable number of data scientists working on research projects involving data from human subjects in sociology, psychology and similar fields (political science, behavioral science, neuropsychology). They were therefore included in the sample as their profile matched the selection criteria. As this is not a representative sample, since it includes participants only from the fields of psychology and sociology, we do not seek to generalize from the findings. Instead we are trying to raise awareness about the possible challenges that the use of the term Big Data is generating for research practices internationally.

A total of 194 interview invitations – 50 for Switzerland and 144 for the US – were sent via email. They contained information on the purpose of the study, participant rights, and the significance of the study. If no reply was received, a reminder was sent a week after the first invitation email. Positive response rates of 40% for Switzerland and 13.2% for the US were obtained. We reached a sample size of 39 researchers. Regarding saturation, we define it as the point in the analysis where no new codes or themes emerge from the analysis, but only mounting instances of the same codes [ 45 , 46 ]. The interviews stopped producing new codes after the seventeenth interview of the Swiss sample and the fifteenth of the US sample, thus reaching saturation. The analysis was carried out until the end of the sample.
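The reported response rates follow directly from the figures above; the snippet below is only a worked verification of that arithmetic.

```python
# Worked check of the response rates, using the invitation and participation
# figures reported in the text (50/20 for Switzerland, 144/19 for the US).
invitations = {"CH": 50, "US": 144}
participants = {"CH": 20, "US": 19}

for country in invitations:
    rate = participants[country] / invitations[country] * 100
    print(f"{country}: {rate:.1f}% positive response rate")
# CH: 40.0% positive response rate
# US: 13.2% positive response rate
```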

Data collection

Interviews were carried out by the first and third author between January 2018 and August 2019. At the time of the interviews, the two authors were doctoral students with backgrounds in philosophy and empirical ethics, and in geography and computer science, respectively. Before starting the interviews, both authors were trained in interviewing skills and took formal methodological courses as part of their PhD education. Once the first pilot interviews were completed, both students received constructive feedback on their performance from two senior researchers in order to ensure the high quality of the collected data.

Interviews with Swiss researchers were performed at a time and place chosen by the interviewee (usually at their home University) or via telephone, according to the participants’ preference and availability. Interviews with American researchers were carried out via Skype or telephone.

Oral informed consent was sought from all participants prior to the start of the interview and recorded once given. From an ethical point of view, for minimal-risk research involving interview studies with experts whose data (transcripts or questionnaires) are anonymized, oral consent and active participation are considered ethically sufficient and proportionate. Furthermore, prior to the beginning of the interview phase, we sought ethics approval from the Ethics Committee northwest/central Switzerland (EKNZ) and received an exemption letter stating that, since in Switzerland interviews with experts (not patients) fall outside the Human Research Act, they do not require ethics committee approval. To make sure that our experts were clearly informed, at the beginning of the discussion the interviewer briefly restated the purpose of the overall study, their role in the project, and the confidential nature of the interview, and allowed the participants to ask questions.

A semi-structured interview guide, built on the experiences of the research team during prior phases of the overall project, was used to conduct the interviews. The guide was designed through discussion and consensus within the research team after they had had time to gain familiarity with the literature and studies on Big Data research in the fields of the social sciences and psychology, and with the knowledge gained through a systematic literature review [ 44 ].

Questions covered (a) the research projects conducted by the interviewee either prior to or at the time of the interview, (b) the participant’s opinion on the use of social media or commercial data for academic research, (c) the researcher’s attitude towards Big Data research, (d) the participant’s personal understanding of Big Data, (e) perceived ethical, regulatory or technical barriers while conducting the research project, (f) institutional regulatory practices and experiences with Institutional Review Boards (IRBs) or Cantonal Review Boards (ECs), the latter only for the Swiss participants, and (g) the researcher’s opinion on data-driven as opposed to theory-driven research. Most of the data presented in this paper comes from the questions related to topics (c) and (d), as they deal with the conceptualization, definition and understanding of Big Data. The other topics will be analyzed elsewhere. Table 2 lists the interview questions relevant to this article.

Table 2. https://doi.org/10.1371/journal.pone.0228987.t002

The interviews lasted between 35 and 90 minutes. All interviews, with both Swiss and American participants, were conducted in English, the language commonly used in academia. Interviews were tape-recorded and subsequently transcribed verbatim to facilitate qualitative analysis. If participants requested, transcripts were returned to them to check the accuracy of the transcription. Only one participant asked for their transcript back and found no inconsistencies.

The transcripts were subsequently imported into the qualitative analysis software MaxQDA (version 2018) to support the analytic process [ 47 ].

Data analysis

Applied thematic analysis was used for data analysis. This method aims at analyzing and reporting thematic elements and patterns within the data in order to organize, describe and interpret the dataset in rich detail [ 48 ]. The transcripts were therefore read in full and independently analyzed by at least two members of the research group. This first step of analysis consisted of open-ended coding to explore the thematic elements in the interviews. The members of the team then came together to compare the independent open-ended coding, and to discuss and sort the identified themes.

Several major themes were identified from this analysis including: regulation of Big Data research, new emerging challenges, collaboration and interdisciplinary approach in digital studies, the understanding of the term Big Data, and attitudes towards Big Data studies.

The understanding and definition of Big Data was chosen for deeper exploration, since the participants gave many different interpretations of the term. All interviews were then analyzed for units of text that related either to the definition of Big Data or to expressions of attitudes or opinions towards the understanding of the term. The units were sorted into sub-codes referring to different ways of defining or interpreting the term Big Data. This phase was carried out by the first author and checked for consistency and accuracy by the second author. Through constant discussion and comparison between the two researchers, the themes were refined and systematically sorted.

Results

For the study, a total of 39 interviews were performed including 21 sociologists (9 from CH and 12 from the US), 11 psychologists (6 from CH and 5 from the US), and 7 data scientists (5 from CH and 2 from the US). Among them, 34 were professors while 5 were postdocs or senior researchers at the time of the interview.

Of the 39 researchers, 27 explicitly stated that they were working on Big Data research projects or on projects that involve Big Data methodologies. Four participants replied that they were not involved in Big Data research and eight were unsure whether their research could be described as Big Data research (see Table 3). A marked difference was found between American and Swiss researchers: among the former, all but one confirmed their involvement in Big Data research, compared to slightly more than half (12 out of 20) of the Swiss respondents. Nevertheless, overall, no significant divergence was found between the two countries with regard to the definition of Big Data. In addition, no considerable dissimilarity was found in the answers based on the research field of the participants, with similar definitions and attitudes equally distributed across psychologists, sociologists and data scientists.

Table 3. https://doi.org/10.1371/journal.pone.0228987.t003

All but one participant gave an answer to the question: how would you define Big Data?

Definitions of Big Data

First, some of our respondents initially admitted that they did not have a definition.

I don't think anybody really knows but I guess for me I would think that it's…. (P3US-S)
I define it as a …dataset of many features, you know, of …yeah, I don't really…It’s funny, I don't really have a definition. (P13US-P).

A sizeable minority of researchers adopted an “essential definition” of Big Data, one based on attributes or properties, while the majority of respondents supported a more “practical definition”, one that is grounded in the practices or processes related to Big Data such as data collection, data source and data processing.

Table 4 illustrates the type of definitions given by our respondents. Some overlaps occur as some participants expressed more than one key definitional trait for Big Data.

Table 4. https://doi.org/10.1371/journal.pone.0228987.t004

Essential definition based on attributes/properties.

Only a few respondents referred to the traditional “several Vs” definition of Big Data: “We have big volume, we have big velocity, right? We have this kind of three V: Volume, Velocity and Variety” (P29CH-D). Some of them used these dimensions to illustrate the many technical challenges that Big Data technologies raise.

I like the definition of the several Vs to sum it up. Big Data is simply all those data issues for which you cannot use a standard database. Right so whenever you have a problem with data and it cannot be solved with a relational database than it's a Big Data problem. (P27CH-D)

There was no agreement among the interviewees on the number of dimensions to attribute to Big Data. One respondent acknowledged that it is uncertain how many dimensions are actually attributed to Big Data: “You know, there are always these different Vs, the 3 Vs, the 5 Vs, the 7 Vs, or whatever the 15 Rs. I don't know there's so many definitions…” (P23CH-S).

Some participants chose to describe Big Data by referring to only one of its dimensions. Of these, volume was mentioned most often, with “Big Data as being a big sample size” (P13US-P) or “Huge amounts of data usually from multiple sources” (P14US-P). Some researchers expressed the idea of a sort of undefined threshold which needs to be crossed in order for the Big Data status to be conferred: “I mean one definition is like, it's data that's too big to fit on one hard drive, or too big to be loaded on the RAM of a single machine.” (P17US-P).

However, a couple of respondents pointed out that volume or size alone are not enough to define a dataset as Big Data: “I think of Big Data studies …I realize the term focuses on the size of the dataset but I actually think of it more as the way the data are …how the data come about” (P26CH-S)

While volume was mentioned most frequently, some respondents highlighted other key characteristics such as variety or complexity:

Actually the very big part of practical work with Big Data in our context is what is sometimes referred to the variety characteristic of Big Data. So you have many sources, data comes in all kind of different formats, forms. (P30CH-S)
Data that…complex data that you find out there compared to data that you have collected for a specific observation or experiment or so. (P5US-S)

Finally, one participant circumscribed the definition of Big Data to its overall impact or value on research and society.

Big Data, I think to me it's more related to how big is the impact of that data. I know that is controversial. Like in research you have certain definitions that are different. I feel that's very fluid, you could have tons of data and then this data has almost no impact and the researchers do not call that Big Data. (P21US-S)

Practical definitions.

Most respondents, instead of focusing on the attributes ascribed to Big Data, identified practical processes, such as data collection and data processing, as the defining components of Big Data.

Source of data . For some participants the source of data was a key factor of the definition. Some spoke for example of digital data coming from technological devices:

[…] but then my internal definition is that …it has to be …it has to draw on some kind of digital data and the analysis has to be digital in some kind of way” (P2US-S)
Well, so Big Data are data that are generated by people when they use different technological devices. (P25CH-P)

The human component of Big Data sources . A considerable number of researchers highlighted the human component and defined Big Data as data generated by people during their daily activities:

What I would probably say more classical Big Data as that when you have like a lot of … people with a lot of data points coming out of …observed situations, so …like computer behavior or like the step counts from your iPhone or the sort of that …that's more the macro perspective perhaps. (P22CH-P)

One researcher directly referred to a specific “official” definition delivered by an academic body:

I go with the definition that is advanced here in the United States by the National Science Foundation, that Big Data is the accumulation, use, assimilation and synthesis of multi-modal, multi-leveled, multiple types of data in real-time so as to allow deep and vast analytics that are both current, retro- as well as prospective. (P11US-P)

Within this context, some participants stated that Big Data offers traces of the real world or mirrors reality because it shows how people spontaneously behave. Others, however, argued that Big Data only gives a limited and sometimes incorrect representation of reality:

We try to understand the reality. And data is just one aspect of the reality, it does not reflect all reality. A typical example is that people have two phones. And so if you try to estimate the number of people travelling somewhere and you actually calculate the number of phones you need to correct for that. And if you talk to people in machine learning they just don't care about it. For their analysis the universe is the dataset. You see? (P38CH-S)

A couple of researchers downplayed the human component by stating that Big Data is just another data structure, and not necessarily linked to the individuals producing that kind of data:

I've never done a Big Data project that I've did the data collection on. […] So by the time the data gets to me it just looks like data. So yeah, it's Big Data but it's data that I … you know, it's big in that sense and it has a lot of rows, a lot of columns …but it's you know, to me it's you know, it just looks like data. […] So yeah, for me it's just another …another data structure. (P3US-S)

One researcher warned against understanding Big Data as just “data” and expressed the need for critical reflection in the humanities to safeguard the people behind the dataset:

The data are also about people (…) This is really a fundamental ethical challenge to all of the social sciences and also social science history and the humanistic, digital humanities as well …the challenges for a deep rethinking, not one that refuses these new tools …but really takes on board the fact that this kind of data organizes, potentially reorganizes the entirety of the academic fields, and beyond actually. […] This is a big issue. (P19US-S)

Collection . Another key feature linked to the definition of Big Data was the procedure of data collection, in particular the absence of a research purpose or of informed consent.

And it’s often the case with Big Data, right? You're often analyzing data that weren't originally generated for the purpose of research and now you want to use it for that purpose. (P4US-P)
In my view Big Data is datasets which are generated from people's behavior without their informed consent. (P9CH-P)

Data processing . A substantial number of respondents mentioned the type of data analysis procedures as one of the components of the definition of Big Data. In this view, Big Data was seen as challenging data that necessitates specific algorithmic or computational processes.

I've been defining it in sort of practical terms as data that require, you know that are in such as scale that they require some algorithmic operation on them to reduce the complexity in a format that makes it possible for you to analyze them. (P6US-S)
I would define it data which is hard to handle. Very generally. For the practitioner. (P30CH-S)

Problem-solving tool . Finally, some researchers expressed the opinion that one of the key components of the definition of Big Data is its pragmatic capacity to act as a tool for answering questions and solving problems in a timely manner:

How easy it is to ask any question to the data that you have available. And … the more …your approach, (…) is a Big Data approach, the easier it is to answer all kinds of questions with your approach. So a good Big Data approach helps you find answers with your own data. (P31CH-D).
Well I guess Big Data is this belief in the possibility of answering old questions or maybe new questions by just … well, by aggregating and then analyzing newly available large data sources. (P28CH-S)

Attitudes towards Big Data

Some of the respondents also expressed an attitude towards the concept of Big Data, either in addition to the definition or as a replacement for it.

The problem of conceptual confusion.

Various respondents pointed to the conceptual ambiguity that surrounds the term Big Data.

Especially with regard to the research environment, a couple of researchers attributed this to the various ways in which the notion is used across disciplines:

I think that every discipline would think of it differently so … in ( specific subfield of physics ) we always thought that we work with Big Data in the sense of very large datasets that need to be managed, you know, with a lot of resources. And we have a lot of complexity in that sense, right? The term though, seems to be more often applied to datasets that come from society …come from new tools and applications and instruments and society, that are just collected constantly, right? (laughs) So… it's a little bit different to the way that we were thinking about it from ( specific subfield of physics ) point of view. (…) it [the definition] depends on the context, you might refer to something different… (P5US-S)

Due to this lack of conceptual clarity, a few researchers were reluctant to use the term Big Data: “I think it isn't a useful term because I think it confuses people (P13US-P)”.

Rather than something “useful”, various participants considered Big Data to be a popular buzzword, a cultural product of our life-world rather than a material entity:

This fuzziness is kind of interesting in itself because it kind of says something about the cultural moment we live in where everything potentially can be described, not everything, but many things can be described as Big Data, right? (…) it says some things about how present these new technologies or new ways of analyzing the world are in our daily life. (P2US-S)

On this note, a few researchers highlighted how, especially within academia, Big Data is used to draw the attention of funding agencies or research institutes:

There's also like a cynical answer about what Big Data is: whatever gets you funding. (P17US-P)
You see it in different levels, you also see it when you have positions advertised. Because Universities and departments see it as a drawback if they don't have anyone doing kind of Big Data research. Very often new positions advertised will include that we're specifically looking for somebody who's doing this kind of research. How this research is being done …that's not something they're interested in. They just see the need to be part of the hype as it were. (P37CH-S)

One participant believed that the conceptual confusion surrounding the term could be overcome if researchers stopped calling their work “Big Data” and started using specific subcategories (e.g. crowd sourcing, social media etc.).

I think it's important to not look at Big Data as ah "ok, you're working on Big Data". Because it's still like a huge world, that you are working on. So I understand the application is Big Data but it's nice that one goes beyond that. And like for example when talking with people who really work on crowd sourcing or social media, I think it would be really helpful when it comes to this kind of topic. (P29CH-D)

One of the researchers, however, believed that, compared to the past, the meaning of Big Data is becoming clearer thanks to its increased use by both experts and laypeople. To explain what he meant, the participant referred to the philosophical concept of “language games”, developed by Ludwig Wittgenstein, for whom the meaning of a word is conferred by its use within the activity of spoken and written language [ 49 ]:

So like anything else, sort of a "Wittgenstein word game", you know? … as we use the word more, the meaning of the word becomes more apparent and also evolves given the actuality of this use. So, when we started to talk about Big Data ten years ago, twelve years ago, … it was relatively amorphous and there were certain vagaries of what actually constituted a Big Data approach. (P11US-P)

Another participant expressed this increasing understanding of what Big Data is as follows: “I think it's like pornography, you know it when you see it.” (P6US-S)

However, only one researcher expressed the belief that there is consensus among researchers in the way that the term is used and understood.

I think there's becoming more of a general consensus of an operational definition of Big Data as the term is being used more frequently. We understand what Big Data means. I mean I think there are a number sub-definitions that are possible. But I think that an overarching or undergirding definition of Big Data is probably pretty uniform at this point. (P11US-P)

A couple of participants even asserted that Big Data is not a new concept, but that researchers have been dealing with the technical challenges of Big Data for many years:

But the concept of Big Data has been around forever. As I said it depends on your resources. You know, so when you have more information than you have resources that's Big Data. So from the very beginning we've been working on problems with Big Data. (P8US-D)

Still, one of the researchers pointed out that, despite its longevity, Big Data is still a concept that brings novelties that need to be grasped by those working in the field:

But again it's not because they put new names on existing concepts that there is nothing new in what they do, right? (P38CH-S)

Discussion

Due to the regulatory and multidisciplinary challenges that Big Data is introducing into academic research, there is currently a need to explore the meaning of Big Data in order to facilitate the development of regulatory frameworks and of collective research networks. This study aims to contribute to the debate on the definition of Big Data by offering a unique insight into the understanding of and attitudes towards Big Data among US- and Swiss-based researchers in psychology and sociology. As both Swiss and US research institutions meet high, internationally recognized standards, we argue that their answers reflect current international discussions in this field.

The study results show that, although there was no consensus among the participants on the interpretation or definition of Big Data, some important overlaps among different definitions could be found. Taking these into consideration, there was substantial agreement among researchers in defining Big Data as huge amounts of digital data produced by technological devices that necessitate specific algorithmic or computational processes in order to answer relevant research questions.

In spite of this agreement, researchers also reported a high amount of uncertainty and uneasiness in pinning down the term Big Data with an overarching standard definition. In the following discussion we will analyze the adequacy of the different definitions and attitudes given by our respondents in light of the literature and the issues related to ambiguities of the definition of Big Data.

Despite the fact that in the academic literature [ 12 , 14 , 22 , 27 , 50 ] and popular media [ 13 , 18 , 51 ] Big Data is often referred to by the several Vs definition, most of the participants in our sample did not appear to consider this definition adequate, as only a few of them used it.

In addition, even the respondents who did do so struggled to circumscribe Big Data to a precise number of characteristics, either giving a generic answer related to the “several Vs” or mentioning just one specific characteristic. This difficulty in narrowing down the attributes of Big Data might stem from the fact that, as the phenomenon grew in popularity, an exponentially increasing number of features were attributed to it ( versatility , volatility , virtuosity , vitality [ 52 ], exhaustivity [ 16 ], extensionality [ 17 ], to quote just a few), leading to confusion about what the essential characteristics of Big Data are.

This may explain why most of the participants preferred a definition grounded in practice (e.g. data source, data collection, data processing). Some of these more “practical” definitions were similar to those described in the literature. For instance, the definitions that focused on data processing, showing how some participants associated Big Data with the purpose for which the data is used, namely Big Data analytics [ 53 ], are in line with studies that emphasize the computational needs behind the processing of large amounts of data as one component of the definition [ 12 , 54 ]. On the other hand, responses that focused on data sources are closer to the official definitions of the European Commission [ 19 ] and the National Science Foundation [ 20 ], which identify Big Data as large amounts of different types of data from different sources (emails, sensors, credit cards, etc.).

However, only one researcher explicitly referred to a definition of an official body, namely that of the National Science Foundation [ 20 , 21 ].

The wide variety of definitions found among the researchers in our sample is probably due to the fact that the term Big Data has not undergone a linear and systematic evolution but has found its meaning as a consequence of its heterogeneous uses in different contexts, both academic and industry-related [ 22 ].

The existence of several different definitions has led to conceptual uncertainty, which in turn has caused some of our respondents to reject the term altogether. This skepticism is reflected in our data, as several participants admitted not having an appropriate definition for Big Data or avoided the term as much as possible, even though many of them stated that they were involved in Big Data research.

This reluctance to pin down a definition or to use the term Big Data highlights the implicit need to adopt a more flexible understanding of the concept. Some researchers in fact associated Big Data with a socio-culturally evolving concept rather than with a precise, fixed entity, or referred to the various disciplines in which the term is currently used. As a culturally driven buzzword, Big Data might not, by its nature, lend itself to a standard definition.

Moreover, it is precisely because Big Data is a flexible, cluster concept that it has been able to attract researchers from various disciplines. However, due to the lack of a unanimous definition, researchers might hold different understandings of Big Data, potentially undermining interdisciplinary collaboration. Although this concern was voiced by one of the participants, it was not confirmed by our results, as there were no large differences among the answers of researchers from psychology, sociology and data science with regard to the definition of Big Data. Even though the commonality of responses across disciplines might be attributed to the fact that most researchers were from the social sciences and closely related fields, it might also point to a presumed (rather than an actual) incommensurability among disciplines.

However, as policymaking bodies are currently struggling to develop adequate guidelines and regulations for Big Data [ 28 , 29 ], the lack of clarity in definitions might complicate the work of IRBs worldwide, as it might become difficult to devise overarching research guidelines and regulations that could support researchers in conducting their work, especially in our fields of investigation, namely psychology and the social sciences.

As digital technologies are becoming more and more entwined with people’s personal characteristics, daily actions and future opportunities, Big Data research creates pressing ethical and societal issues such as privacy and data anonymity [ 31 , 55 ], respect for personhood and personal identity [ 56 ], discrimination [ 44 , 57 ], and informed consent [ 58 , 59 ]. It is therefore of the utmost importance that scholars and regulatory bodies are aware of the harm that could be inflicted on research participants and that sustainable regulations are put in place. This might explain why the human component has become one of the main focuses of the definitions of Big Data given by policymaking bodies (e.g. EU Commission 2016) [ 19 ] and academic researchers [ 60 ].

A finding that is very relevant for policymaking is that many of the researchers in our sample described Big Data as personal data or, in general, as data that retains some sort of link to the person from whom it was gathered. Only two researchers pointed out that they were working just with data and not with research subjects.

The acknowledgment that Big Data are personal data shows that our participants are aware of, and attentive to, the possible harm that could come to research subjects if their data is not collected or analyzed properly. In fact, two researchers explicitly identified Big Data with a concern about the lack of informed consent.

Our participants’ focus on data as personal data and their awareness of the need for strategies to protect research subjects in Big Data research show that the avoidance of the term Big Data cannot be attributed to a fear of over-regulation but seems to come exclusively from the sense of conceptual vagueness surrounding the term. This finding contradicts other studies on the definition of newly developed research technologies, such as nanotechnology and biobanks, which have shown that avoidance of a term is often associated with scholars’ fears of stricter regulation of their research [ 38 , 61 ]. In our study we found no indication of such an attitude.

Finally, a couple of researchers also highlighted that within the academic milieu Big Data is often used to attract funding from external agencies. It is important to remember that the computational social sciences [ 62 ] and digital humanities [ 4 ] were born thanks to the increased digitalization of society, and that Big Data has constituted an important methodological challenge for a large number of “traditional” disciplines in recent years [ 52 ]. While we fully recognize the potential opportunities that Big Data methods offer to multiple research fields [ 1 – 3 , 6 – 8 ], the exaggerated hype around Big Data research might also have negative consequences. On the one hand, it might distract from the pressing ethical concerns that Big Data is introducing both in society and in research [ 55 , 57 , 63 – 65 ], because of the ever bigger promises of beneficial applications that it offers. On the other hand, such hype might aggravate the ambiguity of the term, as it is used as a catch-all to grab the attention of the listener.

In conclusion, the current flexible cultural meaning of Big Data that researchers in the fields of sociology and psychology make use of might exacerbate the difficulty of clearly defining the term. As Kitchin and McArdle [ 26 ] interestingly note, not all Big Data share the same characteristics, and there are multiple forms of Big Data, as there are of small data. This point was also highlighted by a couple of our respondents, who argued that Big Data, in its current cultural meaning, is a tremendously vast concept that includes different subcategories and specifics, each characterized by different technical and regulatory challenges.

Limitations

First, since our respondents were mainly from the fields of psychology and sociology, the study has overlooked the perspectives of other disciplines relevant to Big Data research, for instance medicine, nursing sciences, statistics, geography, and architecture. In addition, the researchers from the field of data science that we interviewed were closely connected to research projects in the social sciences and psychology. Moreover, due to the interdisciplinary nature of Big Data research, it has been difficult to straightforwardly pinpoint the background of some of the researchers, as many of them have gone through a multidisciplinary academic career that qualifies them as experts in more than one field of research (for instance both social science and data science). Finally, it must be acknowledged that the findings from this analysis are not generalizable to the understanding of Big Data among researchers in general, as they are based on only a small portion of researchers from only two disciplines. We therefore argue that further research taking additional disciplines into account might contribute to delivering a more general picture of researchers’ understanding of Big Data. However, as this is, to the best of our knowledge, one of the first studies to analyze this topic from the perspective of expert academics working in the field, we feel that it is an important contribution towards the conceptual clarification of the term Big Data.

Conclusions

Big Data is an interdisciplinary field that requires the connection of different disciplines and the involvement of heterogeneous research skills in order to carry out projects that fully exploit the methodological novelties that Big Data is bringing to the academic environment [ 66 ]. The traditional V’s definition of Big Data was not deemed adequate by our research participants who preferred a more practical definition.

Even though most of the researchers used the term Big Data to describe their research projects, we identified an overall uncertainty or uneasiness towards the term itself. This finding might be a symptom of the tendency to recognize Big Data as a shifting and evolving cultural and scholarly phenomenon, or as a cluster concept that includes a plethora of sophisticated and evolving computing methodologies, rather than as a clearly defined, single entity or methodology.

We argue that treating Big Data as a culturally evolving concept, and therefore accepting the lack of a formal definition, is not without issues. As Big Data is currently raising many important ethical concerns, conceptual clarity about the term would be of the utmost importance in order to devise appropriate guidelines to protect research subjects in Big Data research across disciplines. The use of the term Big Data as a hyped-up buzzword, as currently happens in academic and commercial environments, might further aggravate its conceptual vagueness.

In order to correctly capture the essence and characteristics of Big Data, it might be necessary to deconstruct or unfold the term into its different constituents, thus shifting from broad generalities to specific qualities relevant not only for scientists but also for ethics committees and regulators. However, since to the best of our knowledge only Kitchin and McArdle [ 26 ] have proposed this shift to a more nuanced analysis of the concept of Big Data aimed at unpacking its characteristics, we claim that more research should urgently go in this direction to gain conceptual clarity about what Big Data actually means.

Supporting information

S1 File. Interview guide.

Semi-structured interview guide illustrating the main questions and themes that the researchers asked the participants.

https://doi.org/10.1371/journal.pone.0228987.s001

https://doi.org/10.1371/journal.pone.0228987.s002

  • 1. Salganik M. Bit by Bit: Social Research in the Digital Age. Princeton University Press; 2019.
  • 9. Diebold F. On the origins and development of Big Data: the phenomenon, the term, and the discipline. 2012. https://economics.sas.upenn.edu/sites/economics.sas.upenn.edu/files/12-037.pdf (accessed July 2019).
  • 10. Diebold F. Big data dynamic factor models for macroeconomic measurement and forecasting. In: Dewatripont M, Hansen LP, Turnovsky S, editors. Advances in Economics and Econometrics: Theory and Applications, Eighth World Congress of the Econometric Society; 2003.
  • 12. Ward JS, Barker A. Undefined by data: a survey of big data definitions. arXiv preprint arXiv:1309.5821. 2013.
  • 13. IBM. What is big data? Bringing big data to the enterprise. http://www-01.ibm.com/software/data/bigdata/ (accessed July 2019).
  • 16. Mayer-Schönberger V, Cukier K. Big Data: A Revolution That Will Transform How We Live, Work, and Think. Houghton Mifflin Harcourt; 2013.
  • 17. Marz N, Warren J. Big Data: Principles and Best Practices of Scalable Real-Time Data Systems. New York: Manning Publications; 2015.
  • 18. Perry JS. What is big data? More than volume, velocity and variety… 2017. https://developer.ibm.com/dwblog/2017/what-is-big-data-insight/ (accessed January 2018).
  • 19. European Commission. The EU Data Protection Reform and Big Data: Factsheet. 2016. https://publications.europa.eu/en/publication-detail/-/publication/51fc3ba6-e601-11e7-9749-01aa75ed71a1 (accessed July 2019).
  • 20. National Science Foundation. Core Techniques and Technologies for Advancing Big Data Science & Engineering (BIGDATA) (NSF-12-499). 2012. https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf12499 (accessed July 2019).
  • 21. National Science Foundation. Critical Techniques and Technologies for Advancing Big Data Science & Engineering (BIGDATA) (NSF-14-543). 2014. https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf14543&org=NSF (accessed July 2019).
  • 22. De Mauro A, Greco M, Grimaldi M. What is big data? A consensual definition and a review of key research topics. In: AIP Conference Proceedings; 2015. AIP.
  • 25. Lupton D. The thirteen Ps of big data. 2015. https://simplysociology.wordpress.com/2015/05/11/the-thirteen-ps-of-big-data/ (accessed August 2019).
  • 28. Vayena E, Salathé M, Madoff LC, Brownstein JS. Ethical challenges of big data in public health. Public Library of Science; 2015.
  • 34. Fiske ST, Hauser RM. Protecting human research participants in the age of big data. National Academy of Sciences; 2014.
  • 37. Vitak J, Shilton K, Ashktorab Z. Beyond the Belmont Principles: ethical challenges, practices, and beliefs in the online data research community. In: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW '16); 2016. p. 939–51.
  • 42. Baeriswyl B. «Big Data» ohne Datenschutz-Leitplanken. digma – die Zeitschrift für Datenrecht und Informationssicherheit. 2013:14–7.
  • 43. National Institutes of Health. Big Data to Knowledge. 2019. https://commonfund.nih.gov/bd2k (accessed 19 November 2019).
  • 45. Given LM. 100 Questions (and Answers) About Qualitative Research. SAGE Publications; 2015.
  • 46. Urquhart C. Grounded Theory for Qualitative Research: A Practical Guide. Sage; 2012.
  • 47. Guest G, MacQueen KM, Namey EE. Applied Thematic Analysis. Sage Publications; 2011.
  • 49. Wittgenstein L. Philosophical Investigations. John Wiley & Sons; 2009.
  • 51. SAS Institute. Big Data: what it is and why it matters.
  • 53. Katal A, Wazid M, Goudar R. Big data: issues, challenges, tools and good practices. In: 2013 Sixth International Conference on Contemporary Computing (IC3); 2013. IEEE.
  • 54. Dumbill E. Making sense of big data. Mary Ann Liebert, Inc.; 2013.

  • Open access
  • Published: 29 May 2021

Big data quality framework: a holistic approach to continuous quality management

  • Ikbal Taleb 1 ,
  • Mohamed Adel Serhani   ORCID: orcid.org/0000-0001-7001-3710 2 ,
  • Chafik Bouhaddioui 3 &
  • Rachida Dssouli 4  

Journal of Big Data volume  8 , Article number:  76 ( 2021 ) Cite this article

34k Accesses

43 Citations

4 Altmetric

Metrics details

Big Data is an essential research area for governments, institutions, and private agencies that rely on it to support their analytics decisions. Big Data encompasses everything about data: how it is collected, processed, and analyzed to generate value-added, data-driven insights and decisions. Degradation in data quality may have unpredictable consequences; in such cases, confidence in the data and its source is lost. In the Big Data context, data characteristics such as volume, multiple heterogeneous data sources, and fast data generation increase the risk of quality degradation and require efficient mechanisms to check data worthiness. However, ensuring Big Data Quality (BDQ) is a costly and time-consuming process that demands substantial computing resources. Maintaining quality throughout the Big Data lifecycle requires quality profiling and verification before any processing decision. A BDQ management framework for enhancing pre-processing activities while strengthening data control is proposed. The framework rests on a new concept called the Big Data Quality Profile, which captures a quality outline together with requirements, attributes, dimensions, scores, and rules. Using the framework's Big Data profiling and sampling components, a faster and more efficient data quality estimation is initiated before and after an intermediate pre-processing phase. The exploratory profiling component of the framework plays the initial role in quality profiling; it uses a set of predefined quality metrics to evaluate important data quality dimensions and generates quality rules by applying various pre-processing activities and their related functions. These rules feed the Data Quality Profile and result in quality scores for the selected quality attributes. The framework implementation and dataflow management across the various quality management processes are discussed, and ongoing work on framework evaluation and deployment to support quality evaluation decisions concludes the paper.

Introduction

Big Data is universal [ 1 ]. It consists of large volumes of data of unconventional types, which may be structured, unstructured, or in continuous motion. Whether it is used by industry and governments or by research institutions, new ways of handling Big Data, from the technologies employed to the research approaches applied to its management, are required to support data-driven decisions. The expectations placed on Big Data analytics range from trend finding to pattern discovery in different application domains such as healthcare, business, and scientific exploration, with the aim of extracting significant insights and decisions. Extracting this precious information from large datasets is not an easy task; dedicated planning and an appropriate selection of tools and techniques are required to optimize the exploration of Big Data.

Owning a huge amount of data does not by itself lead to valuable insights and decisions, since Big Data does not necessarily mean big insights. In fact, it can complicate the processes involved in fulfilling such expectations: considerable resources may be required, and existing analytics algorithms may need to be adapted to cope with Big Data requirements. Generally, data is not ready to be processed as it is. It should go through many stages, including cleansing and pre-processing, before undergoing any refining, evaluation, and preparation for the subsequent stages of its lifecycle.

Data Quality (DQ) is a very important aspect of Big Data when assessing the aforementioned pre-processing data transformations. This is because Big Data is mostly obtained from the web, social networks, and the IoT, where it may be found in structured or unstructured form, with no schema and possibly with no quality properties. Exploring data profiling, and more specifically DQ profiling, is essential before data preparation and pre-processing for both structured and unstructured data. A DQ assessment should also be conducted for all data-related content, including attributes and features. An analysis of the assessment results can then provide the elements needed to enhance, control, monitor, and enforce DQ along the Big Data lifecycle, for example by maintaining high data quality (conformance to its requirements) in the processing phase.
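As an illustration of what such attribute-level DQ profiling can look like in practice, the following minimal Python/pandas sketch summarizes each attribute's type, missing-value ratio, and distinct-value count, the kind of outline a quality profile typically starts from. The column names and the use of pandas are our own assumptions for illustration, not part of the framework described here.

```python
import pandas as pd

# Hypothetical sample drawn from a larger data source; column names are illustrative.
df = pd.DataFrame({
    "record_id": [1, 2, 3, 4, 5],
    "age":       [34, 51, None, 29, 42],
    "country":   ["CH", "US", "US", None, "CH"],
})

# Minimal attribute-level quality profile: type, missing-value ratio, distinct values.
profile = pd.DataFrame({
    "dtype":         df.dtypes.astype(str),
    "missing_ratio": df.isna().mean(),
    "distinct":      df.nunique(),
})
print(profile)
```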

Data Quality has been an active and attractive research area for several years [ 2 , 3 ]. In the context of Big Data, quality assessment processes are hard to implement, since they are time- and cost-consuming, especially for pre-processing activities. These issues have intensified because the available quality assessment techniques were initially developed for well-structured data and are not fully appropriate for Big Data. Consequently, new Data Quality processes must be carefully developed to assess the data origin, domain, format, and type. An appropriate DQ management scheme is critical when dealing with Big Data. Furthermore, Big Data architectures do not incorporate quality assessment practices throughout the Big Data lifecycle apart from pre-processing, and some new initiatives are still limited to specific applications [ 4 , 5 , 6 ]. However, the evaluation and estimation of Big Data Quality should be handled in all phases of the Big Data lifecycle, from data inception to analytics, thus supporting data-driven decisions.

The work presented in this paper relates to Big Data Quality management through the Big Data lifecycle. The objective of such a management perspective is to provide users or data scientists with a framework capable of managing DQ from data inception to analytics and visualization, and therefore of supporting decisions. The definition of acceptable Big Data quality depends largely on the type of application and on the Big Data requirements. The need for a Big Data quality evaluation before engaging in any Big Data-related project is pressing, because the high costs involved in processing useless data at an early stage of its lifecycle can be prevented. Further challenges to the data quality evaluation process may occur when dealing with unstructured, schema-less data collected from multiple sources. Moreover, a Big Data Quality Management Framework can provide quality management mechanisms to handle and ensure data quality throughout the Big Data lifecycle by:

Improving the processes of the Big Data lifecycle to be quality-driven, in a way that it integrates quality assessment (built-in) at every stage of the Big Data architecture.

Providing quality assessment and enhancement mechanisms to support cross-process data quality enforcement.

Introducing the concept of Big Data Quality Profile (DQP) to manage and trace the whole data pre-processing procedure, from data source selection to final pre-processed data and beyond (processing and analytics); a hypothetical sketch of such a profile is given after this list.

Supporting profiling of data quality and quality rules discovery based on quantitative quality assessments.

Supporting deep quality assessment using qualitative quality evaluations on data samples obtained using data reduction techniques.

Supporting data-driven decision making based on the latest data assessments and analytics results.
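To make the Data Quality Profile concept more concrete, here is a minimal, hypothetical sketch of how a DQP could be represented in Python. The field names and values are illustrative assumptions only, not the schema used by the framework.

```python
# Illustrative (hypothetical) representation of a Big Data Quality Profile (DQP).
# It records, per attribute, the quality dimensions of interest, their target
# requirements, the measured scores, and the pre-processing rules derived from them.
dqp = {
    "data_source": "web_clickstream_2021",   # hypothetical source identifier
    "sample_size": 100_000,
    "attributes": {
        "age": {
            "dimensions": {
                "completeness": {"requirement": 95, "score": 88},
                "accuracy":     {"requirement": 98, "score": 97},
            },
            "rules": ["impute missing age with median", "drop rows with age > 120"],
        },
    },
}
```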

The remainder of this paper is organized as follows. In Sect. " Overview and background ", we provide detail and background on Big Data and data quality, and introduce the problem statement and the research objectives. The research literature related to Big Data quality assessment approaches is presented in Sect. " Related research studies ". The components of the proposed framework and their main functionalities are described in Sect. " Big data quality management framework ". Finally, the implementation discussion and dataflow management are detailed in Sect. " Implementations: Dataflow and quality processes development ", whereas Sect. " Conclusion " concludes the paper and points to our ongoing research developments.

Overview and background

An exponential increase in global inter-network activity and data storage has triggered the Big Data era. Application domains and platforms, including Facebook, Amazon, Twitter, YouTube, Internet of Things sensors, and mobile smartphones, are the main players and data generators. The amount of data generated daily is around 2.5 quintillion bytes (2.5 exabytes; 1 EB = 10^18 bytes).

According to IBM, Big Data is a high-volume, high-velocity, and high-variety information asset that demands cost-effective, innovative forms of information processing for enhanced insights and decision-making. The term is used to describe a massive volume of both structured and unstructured data that is difficult to process using traditional database and software tools. Big Data also refers to the technologies and storage facilities required by an organization to handle and manage large amounts of data.

Originally, in [ 7 ], the McKinsey Global Institute identified three Big Data characteristics, commonly known as the "3 Vs": Volume, Variety, and Velocity [ 1 , 7 , 8 , 9 , 10 , 11 ]. These characteristics have since been extended to further dimensions, up to 10 Vs (including Volume, Velocity, Variety, Veracity, Value, Vitality, Viscosity, Visualization, and Vulnerability) [ 12 , 13 , 14 ].

In [ 10 , 15 , 16 ], the authors define important Big Data systems architectures. The data in Big Data comes from (1) heterogeneous data sources (e-Gov census data, social networking such as Facebook, and the web, e.g., Google page-rank data), (2) data in different formats (video, text), and (3) data of various forms (unstructured: raw text data with no schema; semi-structured: metadata, graph structure as text). Moreover, data travels through different stages that compose the Big Data lifecycle. Many aspects of Big Data architectures were compiled from the literature; our enhanced design contributions are illustrated in Fig. 1 and described as follows:

Data generation: this is the phase of data creation. Many data sources can generate this data such as electrophysiology signals, sensors used to gather climate information, surveillance devices, posts to social media sites, videos and still images, transaction records, stock market indices, GPS location, etc.

Data acquisition: this consists of data collection, data transmission, and data pre-processing [ 1 , 10 ]. Owing to the exponential growth and availability of heterogeneous data production sources, an unprecedented amount of structured, semi-structured, and unstructured data is available. Big Data pre-processing therefore consists of the typical data pre-processing activities: integration, enhancement and enrichment, transformation, reduction, discretization, and cleansing (a small illustrative sketch of such steps follows Fig. 1).

Data storage: this consists of the data center infrastructure, where the data is stored and distributed among several clusters and data centers spread geographically around the world. The software storage layer is supported by the Hadoop ecosystem to ensure a certain degree of fault tolerance, storage reliability, and efficiency through replication. The data storage stage is responsible for all input and output data that circulates within the lifecycle.

Data analysis (processing, analytics, and visualization): this involves applying data mining and machine learning algorithms to process the data and extract useful insights for better decision making. Data scientists are the primary users of this phase, since they have the expertise to decide what should be applied and what must be analyzed.

Fig. 1: Big Data lifecycle value chain
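As a rough illustration of the pre-processing activities listed under the data acquisition phase, the following Python/pandas sketch chains a few typical cleansing, transformation, and discretization steps. The column names ("user_id", "amount") and thresholds are illustrative assumptions, not the paper's actual pipeline.

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative cleansing, transformation, and discretization steps (not the paper's pipeline)."""
    out = df.drop_duplicates()                   # cleansing: remove duplicate records
    out = out.dropna(subset=["user_id"])         # cleansing: drop rows missing a key field
    out["amount"] = out["amount"].clip(lower=0)  # cleansing: repair impossible negative values
    # transformation: standardize the amount column
    out["amount_z"] = (out["amount"] - out["amount"].mean()) / out["amount"].std()
    # discretization: bucket amounts into four ranges
    out["amount_bin"] = pd.cut(out["amount"], bins=4, labels=False)
    return out
```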

Data quality, quality dimensions, and metrics

The majority of studies in the area of DQ originate from the database [ 2 , 3 ] and management research communities. According to [ 17 ], DQ is not an easy concept to define; its definition depends on awareness of the data domain. There is a consensus that data quality always depends on the quality of the data source [ 18 ]; this, however, also means that substantial quality issues may be hidden inside the data and its values.

In the following, the definitions of data quality, data quality dimensions, and quality metrics and their measurements are given:

Data quality: It has many meanings, related to the data context, domain, area, and the fields in which it is used [ 19 , 20 ]. Academia interprets DQ differently from industry. In [ 21 ], data quality is reduced to "the capability of data to satisfy stated and implied needs when used under specified conditions"; DQ is also defined as "fitness for use". The authors of [ 20 ] define data quality as the property, corresponding to quality management, of being appropriate for use or meeting user needs.

Data quality dimensions: DQDs are used to measure, quantify, and manage DQ [ 20 , 22 , 23 ]. Each quality dimension has a specific metric, which measures its performance. There are several DQDs, which can be organized into four categories according to [ 24 , 25 ]: intrinsic, contextual, accessibility, and representational [ 14 , 15 , 22 , 24 , 26 , 27 ]. Two important categories (intrinsic and contextual) are illustrated in Fig.  2 . Examples of intrinsic quality dimensions are given in Table 1 .

Metrics and measurements: once the data is generated, its quality should be measured. This means that a data-driven strategy is adopted to act on the data; hence, it is mandatory to measure and quantify the DQDs. Structured or semi-structured data is available as a set of attributes represented in columns or rows, with their values recorded accordingly. In [ 28 ], a quality metric is defined as a quantitative or categorical representation of one or more attributes. Any data quality metric should define whether the values of an attribute respect a targeted quality dimension. The author of [ 29 ] noted that data quality measurement metrics tend to produce either binary results (correct or incorrect) or a value between 0 and 100 (with 100% representing the highest quality). This applies to quality dimensions such as accuracy, completeness, consistency, and currency. Examples of DQD metrics are given in Table 2 .

Fig. 2 Data quality dimensions

DQDs must be relevant to the data quality problems that have been identified. Thus, a metric measures whether attributes comply with the defined DQDs. These measurements are performed for each attribute, given its type and the ranges of values collected during the data profiling process. The measurements produce DQD scores for the designed metrics of all attributes [ 30 ]. Specific metrics need to be defined to estimate quality dimensions of other data types such as images, videos, and audio [ 5 ].
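As an illustration of how such metrics can be computed, the sketch below measures completeness and a range-based accuracy as percentage scores over a small tabular sample; the record layout, attribute names, and thresholds are assumptions made for the example.

```python
# Minimal sketch (assumptions: tabular data as a list of dicts; a metric returns
# the percentage of values of an attribute that satisfy the dimension's constraint).

records = [
    {"age": 34, "email": "a@x.com"},
    {"age": None, "email": "b@x"},
    {"age": 250, "email": "c@x.com"},
]

def completeness(records, attr):
    """Percentage of non-missing values for an attribute."""
    values = [r.get(attr) for r in records]
    return 100.0 * sum(v is not None for v in values) / len(values)

def accuracy(records, attr, low, high):
    """Percentage of values falling inside an accepted range [low, high]."""
    values = [r.get(attr) for r in records]
    ok = sum(v is not None and low <= v <= high for v in values)
    return 100.0 * ok / len(values)

print(completeness(records, "age"))      # 66.66...
print(accuracy(records, "age", 0, 120))  # 33.33...
```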

Big data characteristics and data quality

The main Big Data characteristics, commonly referred to as the V's, were initially Volume, Velocity, Variety, and Veracity. Since the inception of Big Data, 10 V's have been defined, and new Vs will probably be adopted [ 12 ]. For example, veracity expresses and describes the trustworthiness of data, mostly known as data quality, while accuracy is often related to precision, reliability, and veracity [ 31 ]. Our tentative mapping among these characteristics, data, and data quality is shown in Table 3 . It is based on the studies in [ 5 , 32 , 33 ], in which the authors attempted to link the V's to the data quality dimensions. In another study [ 34 ], the authors addressed the mapping of the DQD Accuracy to the Big Data characteristic Volume and showed that data size has an impact on DQ.

Big data lifecycle: where does quality matter?

According to [ 21 , 35 ], data quality issues may appear in each phase of the Big Data value chain. Addressing data quality may follow different strategies, as each phase has its own features: either improving the quality of the existing data, and/or refining, reassessing, and redesigning the processes that generate and collect data, so as to improve their quality.

Big Data quality issues have been addressed by many studies in the literature [ 36 , 37 , 38 ]. These studies generally elaborate on the issues and propose generic frameworks, without comprehensive approaches and techniques to manage quality across the Big Data lifecycle. Among these, generic frameworks are presented in [ 5 , 39 , 40 ].

Figure 3 illustrates where data quality can and must be addressed in the Big Data value chain phases/stages, from (1) to (7).

In the data generation phase, there is a need to define how and what data is generated.

In the data transmission phase, the data distribution scheme relies on the underlying networks. Unreliable networks may affect data transfer; transmission quality is reflected in data loss and transmission errors.

Data collection refers to where, when, and how the data is collected and handled. Well-defined, structured constraint verification on the data must be established.

The pre-processing phase is one of the main focus points of the proposed work. It follows a data-driven strategy, centered on the data itself. An evaluation process provides the necessary means to ensure the quality of data for the subsequent phases. Evaluating DQ before (pre) and after (post) pre-processing on data samples is necessary to strengthen the DQP.

In the Big Data storage phase, some aspects of data quality, such as storage failure, are handled by replicating data across multiple storage nodes. The same applies to data transmission when a network fails to transmit data.

In the data processing and analytics phases, the quality is influenced by both the applied process and the data quality itself. Among the various data mining and machine learning algorithms and techniques suitable for Big Data, those that converge rapidly and consume fewer cloud resources will be widely adopted. The relation between DQ and the processing methods is substantial: certain DQ requirements may be imposed on these methods or algorithms to ensure efficient performance.

For an ongoing, iterative value chain, the visualization phase is essentially a representation of the data in an accessible form, such as a dashboard. This helps decision-makers obtain a clear picture of the data and its valuable insights. Finally, in this work, Big Data is transformed into useful Small Data, which is easy to visualize and interpret.

Fig. 3 Where quality matters in the big data lifecycle

Data quality issues

Data quality issues generally appear when the quality requirements are not met on the data values [ 41 ]. These issues are due to several factors or processes occurring at different levels:

Data source level: unreliability, trust, data copying, inconsistency, multi-sources, and data domain.

Generation level: human data entry, sensors’ readings, social media, unstructured data, and missing values.

Process level (acquisition: collection, transmission).

In [ 21 , 35 , 42 ], many causes of poor data quality were enumerated, and a list of elements that affect the quality and the DQDs was produced. This list is reproduced in Table 4 .

Related research studies

Research directions on Big Data differ between industry and academia. Industry scientists mainly focus on the technical implementations, infrastructures, and solutions for Big Data management, whereas researchers from academia tackle the theoretical issues of Big Data. Academia's efforts mainly include the development of new algorithms for data analytics, data replication, data distribution, and the optimization of data handling. In this section, the literature review is classified into three categories, described in the following sub-sections.

Data quality assessment approaches

Existing studies have approached data quality from different perspectives. In the majority of the papers, the authors agree that data quality is related to the phases or processes of its lifecycle [ 8 ]. Specifically, data quality is highly related to the data generation phases and/or to the data origin. The methodologies adopted to assess data quality are based on traditional data strategies and should be adapted to Big Data. Moreover, the application domain and the type of information (content-based, context-based, or rating-based) affect the way the quality evaluation metrics are designed and applied. In content-based quality metrics, the information itself is used as a quality indicator, whereas in context-based metrics metadata are used as quality indicators.

There are two main strategies to improve data quality according to [ 20 , 23 ]: data-driven and process-driven. The first handles data quality in the pre-processing phase by applying pre-processing activities (PPAs) such as cleansing, filtering, and normalization. These PPAs are important and occur before the data processing stage, preferably as early as possible. The process-driven strategy, in contrast, applies quality management to each stage of the Big Data value chain.
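The following sketch illustrates a data-driven strategy in which a few typical PPAs (cleansing, filtering, normalization) are chained before processing; the function names, default value, and accepted range are assumptions for illustration only.

```python
# Minimal sketch of data-driven pre-processing activities (PPAs) applied before
# the processing stage. Function names and thresholds are illustrative assumptions.

def cleanse(records, attr, default):
    """Cleansing: replace missing values of an attribute with a default."""
    return [{**r, attr: r[attr] if r.get(attr) is not None else default} for r in records]

def filter_range(records, attr, low, high):
    """Filtering: drop records whose attribute falls outside an accepted range."""
    return [r for r in records if low <= r[attr] <= high]

def normalize(records, attr):
    """Normalization: min-max scale an attribute to [0, 1]."""
    vals = [r[attr] for r in records]
    lo, hi = min(vals), max(vals)
    span = (hi - lo) or 1.0
    return [{**r, attr: (r[attr] - lo) / span} for r in records]

data = [{"age": 34}, {"age": None}, {"age": 250}, {"age": 51}]
data = normalize(filter_range(cleanse(data, "age", 0), "age", 0, 120), "age")
print(data)  # cleansed, filtered, and scaled records
```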

Data quality assessment was discussed early in the literature [ 10 ], where it is divided into two main categories: subjective and objective. An approach that combines these two categories to provide organizations with usable data quality metrics for evaluating their data was also proposed. However, this approach was not developed to deal with Big Data.

In summary, Big Data quality should be addressed early, in the pre-processing stage of the data lifecycle. The aforementioned Big Data quality challenges have not been investigated in the literature from all perspectives; many open issues remain, especially at the pre-processing stage.

Rule-based quality methodologies

Since the data quality concept is context-driven, it may differ from one application domain to another. The definition of quality rules involves establishing a set of constraints on data generation, entry, and creation. Poor data can always exist, and rules are created or discovered to correct or eliminate such data. Rules themselves are only one part of the data quality assessment approach. A consistent process for creating, discovering, and applying quality rules should consider the following:

Characterize the quality of data being good or bad from its profile and quality requirements.

Select the data quality dimensions that apply to the data quality assessment context.

Generate quality rules based on data quality requirements, quantitative, and qualitative assessments.

Check, filter, optimize, validate, run, and test rules on data samples for efficient rules’ management.

Generate a statistical quality profile with quality rules. These rules represent an overview of successful valid rules with the expected quality levels.

Hereafter, the data quality rules are discovered from the data quality evaluation. These rules are then used in Big Data pre-processing activities to improve the quality of the data. The discovery process raises many challenges and should consider several factors, including data attributes, data quality dimensions, data quality rule discovery, and their relationship with pre-processing activities.

In (Lee et al., 2003), the authors concluded that the data quality problems depend on data, time, and context. Quality rules are applied to the data to solve and/or avoid quality problems. Accordingly, quality rules must be continuously assessed, updated, and optimized.

Most studies on the discovery of data quality rules come from the database community. These studies are often based on conditional functional dependencies (CFDs) to detect inconsistencies in data. CFDs are used to formulate data quality rules, which are generally expressed manually and discovered automatically using several CFD approaches [ 3 , 43 ].
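As a hedged illustration of how a CFD can act as a data quality rule, the sketch below flags records violating a hypothetical dependency ("within country UK, zip determines city"); the dependency and data are invented for the example and are not taken from the cited works.

```python
# Illustrative sketch of a conditional functional dependency (CFD) used as a
# quality rule. The dependency below is a hypothetical example.

records = [
    {"country": "UK", "zip": "CB2 1TN", "city": "Cambridge"},
    {"country": "UK", "zip": "CB2 1TN", "city": "Oxford"},   # violates the CFD
    {"country": "FR", "zip": "CB2 1TN", "city": "Paris"},    # condition not met, ignored
]

def cfd_violations(records, condition, lhs, rhs):
    """Return records that violate the CFD (condition: lhs -> rhs)."""
    mapping, violations = {}, []
    for r in records:
        if not condition(r):
            continue
        key = r[lhs]
        mapping.setdefault(key, r[rhs])
        if mapping[key] != r[rhs]:
            violations.append(r)
    return violations

print(cfd_violations(records, lambda r: r["country"] == "UK", "zip", "city"))
```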

Data quality assessment in Big Data has been addressed in several studies. In [ 32 ], a Data Quality-in-Use model was proposed to assess the quality of Big Data; business rules for data quality are used to decide whether the data meets pre-defined constraints or requirements. In [ 44 ], a new quality assessment approach was introduced that involves both the data provider and the data consumer; the assessment is mainly based on data consistency rules provided as metadata.

The majority of research studies on data quality and the discovery of data quality rules are based on CFDs and databases. In Big Data quality, the size, variety, and veracity of data are key characteristics that must be considered, and they should be handled before the pre-processing phase to reduce the time and resources required for quality assessment. Regarding quality rules, it is fundamental to use them to eliminate poor data and enforce quality on existing data, while following a data-driven quality context.

Big data pre-processing frameworks

The pre-processing of data before performing any analytics is of primary importance. However, several challenges emerge at this crucial phase of the Big Data value chain [ 10 ]. Data quality is one of these challenges and must be given particular attention in the Big Data context.

As pointed out in [ 45 ], data quality problems arise when dealing with multiple data sources, which significantly increases the need for data cleansing. Additionally, the large size of datasets, which arrive at an uncontrolled speed, generates an overhead on the cleansing processes. In [ 46 , 47 , 48 ], NADEEF, an extensible data cleaning system, was proposed, and an extension of NADEEF for cleaning streaming Big Data was presented in [ 49 ]. The system addresses data quality through the data cleansing activity, using data quality rules and functional dependency rules [ 14 ].

Numerous other studies on Big Data management frameworks exist. In these studies, the authors surveyed and proposed Big Data management models dealing with storage, pre-processing, and processing [ 50 , 51 , 52 ]. An up-to-date review of the techniques and methods for each process involved in the management processes is also included.

The importance of quality evaluation in Big Data management has generally not been addressed. In some studies, Big Data characteristics are the only recommendations regarding quality, and no mechanisms have been proposed to map or handle the quality issues that might be a consequence of these Big Data V's. A Big Data management framework that includes data quality management must therefore be developed to cope with end-to-end quality management across the Big Data lifecycle.

Finally, it is worth mentioning that research initiatives and solutions on Big Data quality are still in their preliminary phase; there is much to do on the development and standardization of Big Data quality. Big Data quality is a multidisciplinary, complex, and multi-variant domain, where new evaluation techniques, processing and analytics algorithms, storage and processing technologies, and platforms will play a key role in the development and maturity of this active research area. We anticipate that researchers from academia will contribute to the development of new Big Data quality approaches, algorithms, and optimization techniques, which will advance beyond the traditional approaches used in databases and data warehouses. Additionally, industries will lead development initiatives of new platforms, solutions, and technologies optimized to support end-to-end quality management within the Big Data lifecycle.

Big data quality management framework

The purpose of proposing a Big Data Quality Management Framework (BDQMF) is to address quality at all stages of the Big Data lifecycle. This can be achieved by managing data quality before and after the pre-processing stage, while providing feedback at each stage and looping back to the previous phase whenever possible. We also believe that data quality must be handled at data inception; however, this is not considered in this work.

To overcome the limitations of existing Big Data architectures in managing data quality, a Big Data quality pre-processing approach is proposed: a quality framework [ 53 ]. In our framework, the quality evaluation process extracts the actual quality status of Big Data and proposes efficient actions to avoid, eliminate, or enhance poor data, thus improving its quality. The framework features the creation and management of a DQP and its repository. The proposed scheme deals with data quality evaluation before and after the pre-processing phase. These practices are essential to ensure a certain quality level for the next phases, while maintaining the optimal cost of the evaluation.

In this work, a quantitative approach is used. It consists of an end-to-end data quality management system that deals with DQ through tasks executed ahead of pre-processing to evaluate BDQ on the data. It starts with data sampling, data and DQ profiling, and the gathering of user DQ requirements. It then proceeds to DQD evaluation and the discovery of quality rules from the quality scores and requirements. Each data quality rule is implemented by one or many Pre-Processing Functions (PPFs) belonging to a specific Pre-Processing Activity (PPA); a PPA, such as cleansing, aims at increasing data quality. Pre-processing is first applied to Big Data samples and the quality is re-evaluated to update and certify that the quality profile is complete; the certified profile is then applied to the whole Big Dataset, not only to the data samples. Before pre-processing, the DQP is tuned and revisited by quality experts for endorsement, based on an equivalent data quality report that states the quality scores of the data, not the rules.

Framework description

The BDQM framework is illustrated in Fig.  4 , where all the components cooperate, relying on the Data Quality Profile. The DQP is initially created as a Data Profile and is progressively extended from the data collection phase to the analytics phase to capture important quality-related information, for example, quality requirements, targeted data quality dimensions, quality scores, and quality rules.

Fig. 4 Big data sources

Data lifecycle stages are part of the BDQMF. The feedback generated in all stages is analyzed and used to correct and improve data quality and to detect any DQ-management-related failures. The key components of the proposed BDQMF include:

Big Data Quality Project (Data Sources, Data Model, User/App Quality Requirements, Data domain),

Data Quality Profile and its Repository,

Data Preparation (Sampling and Profiling),

Exploratory Quality Profiling,

Quality Parameters and Mapping,

Quantitative Quality Evaluation,

Quality Control,

Quality Rules Discovery,

Quality Rules Validation,

Quality Rules Optimization,

Big Data Pre-Processing,

Data Processing,

Data Visualization, and

Quality Monitoring.

A detailed description of each of these components is provided hereafter.

Framework key components

In the following sub-sections, each component is described, along with its input(s) and output(s), its main functions, and its roles and interactions with the other framework components. Consequently, at each Big Data stage, the Data Quality Profile is created, updated, and adapted until it achieves the quality requirements set by the users or applications at the beginning of the Big Data Quality Project.

Big data quality project module

The Big Data Quality Project module contains all the elements that define the data sources and the quality requirements set by either the Big Data users or the Big Data applications, representing the quality foundations of the Big Data project. As illustrated in Fig. 5, any Big Data Quality Project should specify a set of quality requirements as targeted quality goals.

This is the first module of the framework and the starting point of the BDQMF, where the data model, the data sources, and the targeted quality goals for DQDs and data attributes are specified. These requirements are expressed as data quality scores/ratios reflecting the acceptance level of the evaluated data quality dimensions. For example, 80% data accuracy, 60% data completeness, and 85% data consistency may be judged by quality experts as accepted levels (or tolerance ratios). These levels can be relaxed using a range of values, depending on the context, the application domain, and the requirements of the targeted processing algorithms.

Let us denote by BDQP(DS, DS', Req) a Big Data Quality Project request, which initiates many automatic processes:

A data sampling and profiling process.

An exploratory quality profiling process, which is included in many quality assessment procedures.

A pre-processing phase, which is considered if the resulting quality scores do not meet the requirements.

The BDQP contains the input dataset DS, the output dataset DS', and Req. The quality requirements are presented as a tuple of sets Req = (D, L, A), described below (a minimal sketch of this structure follows the list), where:

D represents a set of data quality dimensions (DQDs), e.g., accuracy and consistency: \(D=\{d_0,\dots,d_i,\dots,d_m\}\),

L is a set of DQD acceptance (tolerance) level ratios (%) set by the user or the application related to the quality project and associated with each DQD, respectively: \(L=\{l_0,\dots,l_i,\dots,l_m\}\),

A is the set of targeted data attributes: \(A=\{a_0,\dots,a_i,\dots,a_m\}\). If it is not specified, the DQDs are assessed for the dataset with all possible attributes; some dimensions need more detailed requirements to be assessed, depending on the DQD and the attribute type.
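A minimal sketch of the Req = (D, L, A) structure is given below, reusing the example tolerance ratios mentioned earlier (60% completeness, 80% accuracy, 85% consistency); the attribute names and the Python representation are assumptions made for the example.

```python
# Minimal sketch of the quality requirements Req = (D, L, A) of a BDQP.
# Attribute names and the representation are illustrative assumptions.

D = ["completeness", "accuracy", "consistency"]   # targeted DQDs d_0..d_m
L = [0.60, 0.80, 0.85]                            # acceptance levels l_0..l_m (ratios)
A = [["age", "email"], ["age"], None]             # targeted attributes a_0..a_m (None = all)

Req = list(zip(D, L, A))          # each r_i = (d_i, l_i, a_i)
BDQP = {"DS": "raw_dataset", "DS_out": "clean_dataset", "Req": Req}
print(BDQP["Req"][0])             # ('completeness', 0.6, ['age', 'email'])
```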

The data quality requirements might be further refined once the profiling component provides detailed information about the data (DQP Level 0). This update is performed within the quality mapping component, which interfaces with expert users to refine, reconfirm, and restructure their data quality parameters over the data attributes.

Data sources: there are multiple Big Data sources. Most of them are generated from new, Internet-based media (e.g., social media); other data sources arise from new technologies such as the cloud, sensors, and IoT.

Data users, data applications, and quality requirements: this module identifies and specifies the input sources of the quality requirement parameters for the data sources. These sources include users' quality requirements (e.g., domain experts, researchers, analysts, and data scientists) or application quality requirements (applications may vary from simple data processing to machine learning or AI-based applications). For the users, a dashboard-like interface is used to capture data requirements and other quality information. This interface can be enriched with information from the data sources, such as attributes and their types, if available; this can efficiently guide users in their inputs and ensure the right data is used. This phase can be initiated after sample profiling or exploratory quality profiling. Otherwise, a general quality request is entered in the form of targeted data quality dimensions and their expected quality scores after the pre-processing phase. All the quality requirement parameters and settings are recorded in the Data Quality Profile (DQP 0); DQP Level 0 is created when the quality project is set up.

The quality requirements are specifically set as quality score ratios, goals, or targets to be achieved by the BDQMF. They are expressed as targeted DQDs in the Big Data Quality Project.

Let us denote by Req a set of quality requirements \(Req=\{r_0,\dots,r_i,\dots,r_m\}\) constructed from the tuple (D, L, A). Each element \(r_i=(d_i,l_i,a_i)\) is a quality requirement: \(r_i\) requires the DQD \(d_i\) to reach a minimum accepted ratio level \(l_i\) for all attributes or for a sub-list of selected attributes \(a_i\).

The initial DQP originating from this module is a DQP Level 0, containing the following tuple, as illustrated in Fig.  6 : BDQP(DS, DS', Req) with Req = (D, L, A).

Data models and data domains

Data models: if the data is structured, a schema is provided to add more detailed quality settings for all attributes. If there are no such attributes or types, the data is considered unstructured, and its quality evaluation relies on a set of general Quality Indicators (QIs). In our framework, these QIs are provided especially for the cases where a direct identification of DQDs is not available for an easy quality assessment.

Data domains: each data domain has its own set of default quality requirements. Some are very sensitive to accuracy and completeness; others prioritize data currency and timeliness. This module adds value for users or applications when it comes to quality requirements elicitation.

Fig. 6 BDQP and quality requirements settings

Fig. 7 Exploratory quality profiling modules

Data quality profile creation: once the Big Data Quality Project (BDQP) is initiated, the DQP Level 0 (DQP0) is created; it consists of the following elements, as illustrated in Fig. 7 (a minimal sketch follows the list):

Data sources information, which may include datasets, location, URL, origin, type, and size.

Information about data that can be created or extracted from metadata if available, such as database schema, data attributes names and types, data profile, or basic data profile.

Data domains such as business, health, commerce, or transportation.

Data users, which may include the names and positions of each member of the project, security credentials, and data access levels.

Data application platforms, software, programming languages, or applications that are used to process the data. These may include R, Python, Java, Julia, Orange, Rapid Miner, SPSS, Spark, and Hadoop.

Data quality requirements: for each dataset, the expected quality ratios and tolerance levels under which the data is accepted; otherwise, the data is discarded or repaired. They can also be set as a range of quality tolerance levels. For example, the DQD completeness may be required to be equal to or higher than 67%, which means that the accepted ratio of missing values is equal to or less than 33% (100% − 67%).
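A minimal sketch of such a DQP Level 0, with purely illustrative values for every field, could look as follows:

```python
# Minimal sketch (illustrative values only) of a DQP Level 0 gathering the
# elements listed above at project initiation.

dqp0 = {
    "data_sources": [{"name": "sensors_feed", "type": "CSV", "size_gb": 120,
                      "location": "/data/sensors/"}],
    "metadata": {"schema": {"age": "int", "email": "string"}},
    "data_domain": "health",
    "data_users": [{"name": "analyst_1", "role": "data scientist", "access": "read"}],
    "platforms": ["Python", "Spark"],
    "quality_requirements": {
        "completeness": {"min_ratio": 0.67},   # i.e., at most 33% missing values
        "accuracy": {"min_ratio": 0.80},
    },
}
print(dqp0["quality_requirements"]["completeness"])
```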

Data quality profile (DQP) and repository (DQPREPO)

We describe hereafter the content of DQP and the DQP repository and the DQP levels captured through the lifecycle of framework processes.

  • Data quality profile

The data quality profile is generated once a Big Data Quality Project is created. It contains, for example, information about the data sources, domain, attributes, or features. This information may be retrieved from metadata, data provenance, a schema, or from the dataset itself. If it is not available, data preparation (sampling and profiling) is needed to collect and extract the important information that will support the subsequent processes, and the Data Profile (DP) is created.

Exploratory quality profiling then generates a quality rules proposal list. The DP is updated with these rules and converted into a DQP. This helps the user obtain an overview of some DQDs and make a better attribute selection based on this first quality approximation, with a ready-to-use list of rules for pre-processing.

The user/application quality requirements (quality tolerance levels, DQDs, and targeted attributes) are set and added to the DQP. The previously proposed quality rules are then most likely updated and tuned, or the quality requirement parameters are completely redefined.

The mapping and selection phase will update the DQP with a DQES, which contains the set of attributes to be evaluated for a set of DQDs, using a set of metrics from the DQP repository.

The Quantitative Quality Evaluation component assesses the DQ and updates the DQES with DQD Scores.

The DQES scores then pass through quality control; if they are validated, the DQP is executed in the pre-processing stage and confirmed in the repository.

If the scores (based on the quality requirements) are not valid, quality rules discovery, validation, and optimization are added to or updated in the DQP configuration to obtain valid DQD scores that satisfy the quality requirements.

Continuous quality monitoring is performed; any DQ failure triggers a DQP update.

The DQP repository: the DQPREPO contains detailed data quality profiles per data source and dataset. The information managed by the repository includes:

Data Quality User/App requirements.

Data Profiles, Metadata, and Data Provenance.

Data Quality Profiles (e.g. Data Quality Evaluation Schemes, and Data Quality Rules).

Data Quality Dimensions and related Metrics (metrics formulas and aggregate functions).

Data Domains (DQD’s, BD Characteristics).

DQD’s vs BD Characteristics.

Pre-processing Activities (e.g. Cleansing, and Normalizing) and functions (to replace missing values).

DQD’s vs DQ Issues vs PPF: Pre-processing Functions.

DQD’s priority processing in Quality Rules.

At every stage, module, task, or process, the DQP repository is incrementally updated with quality-related information. This includes, for example, quality requirements, DQES, DQD scores, data quality rules, Pre-Processing activities, activity functions, DQD metrics, and Data Profiles. Moreover, the DQP’s are organized per Data Domain and datatype to allow reuse. Adaptation is performed in the case of additional Big Datasets.

In Table 5 , an example of DQP Repository managed information along with its preprocessing activities (PPA) and their related functions (PPAF), is presented.

DQP lifecycle (levels): the DQP goes through the complete process flow of the proposed BDQMF. It starts with the specification of the Big Data Quality Project and ends with quality monitoring, an ongoing process that closes the quality enforcement loop and triggers other processes, which handle DQP adaptation, upgrade, or reuse. In Table 6 , the various DQP levels and their interactions with the BDQM Framework components are described; each component involves process operations applied to the DQP.

Data preparation: sampling and profiling

Data preparation generates representative Big Data samples that serve as an entry for profiling, quality evaluation, and quality rules validation.

Sampling: several sampling strategies can be applied to Big Data, as surveyed in [ 54 , 55 ]. In these works, the authors evaluated the effect of sampling methods on Big Data and concluded that sampling large datasets reduces the run-time and computational footprint of link prediction algorithms while maintaining adequate prediction performance. In statistics, the bootstrap technique estimates the sampling distribution of an estimator by resampling, with replacement, from the original sample. In the Big Data context, bootstrap sampling has been studied in several works [ 56 , 57 ]. In the proposed data quality evaluation scheme, the Bag of Little Bootstraps (BLB) [ 58 ] is used, which combines the results of bootstrapping multiple small subsets of a Big Data dataset. The BLB algorithm draws small samples from the original Big Dataset without replacement; for each such sample, another set of samples is created by re-sampling with replacement.
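The sketch below illustrates BLB-style sampling as described above, drawing small subsets without replacement and then resampling each with replacement; the sizes and the plain-Python implementation are simplifying assumptions, not the full BLB estimator.

```python
# Minimal sketch of Bag-of-Little-Bootstraps (BLB)-style sampling: small subsets
# are drawn without replacement, then each is resampled with replacement.

import random

def blb_samples(dataset, n_subsets, subset_size, n_resamples, resample_size):
    """Yield bootstrap resamples drawn from independently drawn small subsets."""
    for _ in range(n_subsets):
        subset = random.sample(dataset, subset_size)          # without replacement
        for _ in range(n_resamples):
            yield [random.choice(subset) for _ in range(resample_size)]  # with replacement

data = list(range(10_000))                 # stand-in for a Big Data partition
samples = list(blb_samples(data, n_subsets=4, subset_size=500,
                           n_resamples=3, resample_size=500))
print(len(samples), len(samples[0]))       # 12 500
```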

Profiling: the data profiling module performs data quality screening based on statistics and summary information [ 59 , 60 , 61 ]. Since profiling is meant to discover data characteristics from data sources, it is considered a data assessment process that provides a first summary of the data quality, reported in the data profile. Such information includes, for example, the data format description, the different attributes with their types and values, basic quality dimension evaluations, data constraints (if any), and data ranges (maxima and minima, sets of specific values, or subsets).

More precisely, the information about the data is of two types: technical and functional. It can be extracted from the data itself without any additional representation, from metadata or a descriptive header file, or by parsing the data using analysis tools. This task may become very costly for Big Data; therefore, to avoid costs driven by the data size, the same BLB-based sampling process is used, reducing the data to representative samples whose profiling results are then combined. A data profile in the proposed framework corresponds to a data quality profile of the first level (DQP1), which is generated after the profiling phase. Moreover, data profiling provides useful information that leads to significant data quality rules, usually called data constraints. These rules are mostly equivalent to a structured-data schema and are represented as technical and functional rules.

According to [ 61 ], many activities and techniques are used to profile the data, ranging from online, incremental, and structural, to continuous profiling. Profiling tasks aim at discovering information about the data schema. Some data sources are already provided with their data profiles, sometimes with minimal information. The following techniques can enrich a data profile and bring added value:

Data provenance inquiry : it tracks the data origin and provides information about data transformations, data copying, and its related data quality through the data lifecycle [ 62 , 63 , 64 ].

Metadata : it provides descriptive and structural information about the data. Many data types, such as images, videos, and documents, use metadata to provide deep information about their contents. Metadata can be represented in many formats, including XML, or it can be extracted directly from the data itself without any additional representation.

Data parsing (supervised/manual/automatic): data parsing is required since not all data has provenance or metadata that describes it. The hardest way to gather extra information about the data is to parse it. Automatic parsing can be applied initially and then tuned and supervised manually by a data expert. This task may become very costly for Big Data, especially in the case of unstructured data. Consequently, a data profile may be generated to represent only those parts of the data that are meaningful; therefore, multiple data profiles for multiple data partitions must be taken into consideration.

Data profile: it is generated early in the Big Data project as DQP Level 0 (the data profile in its early form) and upgraded to a data quality profile within the data preparation component as DQP Level 1. It is then updated and extended through all the components of the Big Data Quality Management Framework until it reaches DQP Level 2. The DQP Level 8 is the profile applied to the data in the pre-processing phase, with its quality rules and related activities, to output pre-processed data that conforms to the quality requirements.

Exploratory quality profiling

Since a data-driven approach is followed, in which quality dimensions are evaluated quantitatively from the data itself, two evaluation steps are adopted: quantitative quality evaluation based on user requirements, and exploratory quality profiling.

The exploratory quality profiling component is responsible for the automatic exploration of data quality dimensions, without user intervention. The quality rules proposals module, which produces a list of actions to raise data quality, is based on elementary DQDs that fit all varieties and data types.

A list of quality rule proposals, based on the quality evaluation of the most commonly considered DQDs (e.g., completeness, accuracy, and uniqueness), is produced. This preliminary assessment is performed on the data itself using predefined scenarios, which are meant to increase data quality for some basic DQDs. In Fig. 7, the steps involved in exploratory quality profiling for the generation of quality rule proposals are depicted. DQP1 is extended to DQP2 after adding the Data Quality Rules Proposal (DQRP) generated by the quality rules proposals process.

This module is part of the DQ profiling process; it varies the DQD tolerance levels from minimum to maximum scores and applies a systematic list of predefined quality rules. These predefined rules are sets of actions applied to the data when the measured DQD scores are not within the tolerance level defined by the minimum and maximum scores. The actions vary from deleting only attributes, to discarding only observations, to a combination of both. After these actions, a re-evaluation of the new DQD scores leads to a quality rules proposal (DQRP) with known DQD target scores. In Table 7, some examples of these predefined rule scenarios for the DQD completeness (dqd = Comp), with an execution priority for each set of grouped actions, are described. The DQD levels are set to vary from a 5% to a 95% tolerance score with a granularity step of 5; they can be set differently according to the chosen DQD and its sensitivity to the data model and domain. The selection of the best-proposed data quality rules is based on the KNN algorithm using the Euclidean distance (Deng et al. 2016; [ 65 ]); it returns the quality rule parameters that come closest to achieving (by default) high completeness with the least data reduction. The process can be refined by specifying other quality parameters.
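The following sketch illustrates this exploratory sweep for the single DQD completeness: tolerance levels are varied, attributes exceeding the tolerated missing ratio are dropped, remaining incomplete rows are discarded, and each setting yields a rule proposal with its data reduction. The data, the actions, and the omitted nearest-neighbour selection step are simplified assumptions.

```python
# Minimal sketch (assumptions: records as a list of dicts; only the DQD
# completeness is explored; actions limited to dropping attributes and rows).

records = [{"a": 1, "b": None}, {"a": 2, "b": 2}, {"a": None, "b": 3}, {"a": 4, "b": 4}]
attrs = ["a", "b"]

def missing_ratio(recs, attr):
    return sum(r[attr] is None for r in recs) / len(recs)

proposals = []
for level in range(5, 100, 5):                      # tolerance levels 5%..95%
    tol = level / 100
    kept_attrs = [a for a in attrs if missing_ratio(records, a) <= tol]
    kept_rows = [r for r in records if all(r[a] is not None for a in kept_attrs)]
    reduction = 1 - (len(kept_rows) * len(kept_attrs)) / (len(records) * len(attrs))
    proposals.append({"tolerance": tol,
                      "drop_attrs": [a for a in attrs if a not in kept_attrs],
                      "rows_kept": len(kept_rows),
                      "data_reduction": round(reduction, 2)})

# A selection step (e.g., nearest neighbour on the target scores, as in the text)
# would then pick the proposal closest to the required completeness with the
# least data reduction.
print(proposals[0], proposals[-1])
```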

The modules involved in exploratory quality profiling for the generation of quality rule proposals are illustrated in Fig.  8 .

Fig. 8 Quality rules proposals with exploratory quality profiling

Quality mapping and selection

The quality mapping and selection module of the BDQM framework is responsible for mapping data features or attributes to DQDs, targeting the required quality evaluation scores. It generates a Data Quality Evaluation Scheme (DQES) and adds it to (updates) the DQP. The DQES contains the DQDs of the appropriate attributes to be evaluated using adequate metric formulas. As part of the DQP, the DQES contains, for each of the selected data attributes, the following elements, which are essential for the quantitative quality evaluation:

The attributes: all or a selected list,

The data quality dimensions (DQD’s) to be evaluated for each selected attribute,

Each DQD has a metric that returns the quality score, and

The quality requirement scores for each DQD, needed for score validation.

These requirements are general and target global quality levels; the mapping component acts as a refinement of the global settings, with precise quality goals. Therefore, a mapping must be performed between the data quality dimensions and the targeted data features/attributes before proceeding with the quality assessment. Each DQD is measured for each attribute and sample. The mapping generates a DQES, which contains Quality Evaluation Requests (QERs) Q x. Each QER Q x targets a data quality dimension (DQD) for one attribute, all attributes, or a set of selected attributes, where x indexes the requests.

Quality mapping: Many approaches are available to accomplish an efficient mapping process. These include automatic, interactive, manual, and based on quality rules proposals techniques:

Automatic : it performs the alignment and comparison of the data attributes (from the DQP) with the data quality requirements (either per attribute type or per name). A set of DQDs is associated with each attribute for quality evaluation, resulting in a set of associations to be executed and evaluated in the quality assessment component.

Interactive : it relies on experts’ involvement to refine, amend, or confirm the previous automated associations.

Manual : it uses a dashboard similar to, but more advanced than, the quality requirements interface described above, with more detail at the attribute level.

Quality rules proposals : the proposal list collected from DQP2 is used to understand the impact of a DQD level and the data reduction ratio. These quality insights help decide which DQD is best suited, compared against the quality requirements.

Quality selection (of DQDs, metrics, and attributes): it consists of selecting an appropriate quality metric to evaluate a data quality dimension for an attribute of a Big Data sample set; the metric returns the count of correct values, i.e., those that comply with the metric formula. Each metric is computed on the attribute values against the DQD constraints. For example, accuracy can be defined as the count of attribute values lying in a certain range [v 1 , v 2 ]. Similarly, it can be defined as satisfying a certain number of constraints related to the type of data, such as zip codes, emails, social security numbers, dates, or addresses.

Let us define the tuple DQES(S, D, A, M). Most of the information is provided by the BDQP(DS, DS', Req) parameters, with Req = (D, L, A). The profiling information is used to select the appropriate quality metric \(m_l\) to evaluate the data quality dimension \(q_l\) for an attribute \(a_k\) with a weight \(w_j\). In addition to the previous settings, let us denote by \(S(DS, N, n, R) \to s_i\) a sampling strategy.

Let us denote by M a set of quality metrics \(M=\{m_1,\dots,m_l,\dots,m_d\}\), where \(m_l\) is a quality metric that measures and evaluates a DQD \(q_l\) for each value of an attribute \(a_k\) in the sample \(s_i\), returning 1 if the value is correct and 0 otherwise. Each metric \(m_l\) is computed by checking whether the value of the attribute satisfies the \(q_l\) constraint; for example, the accuracy of an attribute may be defined as its value lying in a range between 0 and 100, any other value being incorrect. If the same DQD \(q_l\) is evaluated for a set of attributes and the weights are all equal, a simple mean is computed. The metric \(m_l\) is evaluated for each attribute and for each instance (cell or row) of the sample \(s_i\).

Let us denote by \(M_l^{(i)}, i=1,\dots,N\), the total for metric \(m_l\), i.e., the number of observations that satisfy this metric for a DQD \(q_l\) of an attribute \(a_k\), in the \(i\)-th of the \(N\) samples drawn from the dataset DS.

The proportion of observations satisfying the adequacy rule in a sample \(s_i\) of size \(n\) is given by \(q_l^{(i)} = M_l^{(i)}/n\).

The total proportion of observations satisfying the adequacy rule over all samples is given by the sample mean \(\overline{M}_l = \frac{1}{N}\sum_{i=1}^{N} q_l^{(i)}\),

where \(\overline{M}_l\) characterizes the mean \(q_l\) score for the whole dataset.

Let \(Q_x(a_k, q_l, m_l)\) represent a request for a quality evaluation, which results in the mean quality score of a DQD \(q_l\) for a measurable attribute \(a_k\), calculated via \(M_l\). Big Data samples are evaluated for a DQD \(q_l\) in each sample \(s_i\) for an attribute \(a_k\) with a metric \(m_l\), providing a \(q_l\) score per sample (described below in Quantitative Quality Evaluation); the sample mean of \(q_l\) is then the final score for \(a_k\).

Let us also denote a process that sorts and combines the quality evaluation requests (QERs) by DQD or by attribute, re-arranging the \(Q_x(a_k, q_l, m_l)\) tuples into two types, depending on the evaluation selection group parameter:

Per DQD, identified as \(Q_x(AList(a_z), q_l, m_l)\), where AList(a z ) represents the attributes \(a_z\) (z: 1…R) to be evaluated for the DQD \(q_l\).

Per attribute, identified as \(Q_x(a_k, DList(q_l, m_l))\), where DList(q l , m l ) represents the data quality dimensions \(q_l\) (l: 1…d) to be evaluated for the attribute \(a_k\).

In some cases, the type of combination is automatically selected for a certain DQD, such as consistency, when all the attributes are constrained towards specific conditions. The combination is either based on attributes or DQD’s, and the DQES will be constructed as follows:

DQES(\(Q_x(AList(a_z), q_l, m_l)\), …, …) or

DQES(\(Q_x(a_k, DList(q_l, m_l))\), …, …)

The completion of the quality mapping process updates the DQP Level 2 with a DQES set as follows:

DQES(\(Q_x(a_k, q_l, m_l)\), …, …), where x ranges from 1 to the number of evaluation requests. Each Q x element is a quality evaluation request for an attribute \(a_k\), a quality dimension \(q_l\), and a DQD metric \(m_l\).

The output of this phase is a set of DQES scores, containing the mean score of each DQ dimension for one or many attributes. The mapping and selection data flow, initiated using the Big Data quality project (BDQP) settings, is illustrated in Fig.  9 . This is accomplished either by using the same BDQP Req or by defining more detailed and refined quality parameters and a sampling strategy. Two types of DQES can be produced:

Data Quality Dimension-wise evaluation of a list of attributes or

Attribute-wise evaluation of many DQDs. As described before, the quality mapping and selection component generates a DQES evaluation scheme for the dataset, identifying which DQD and attribute tuples to evaluate using a specific quality metric. A more detailed and refined set of parameters can also be specified, as described in previous sections. The steps that construct the DQES in the mapping component are as follows (a minimal sketch follows the list):

The QMS function extracts the Req parameters from BDQP as (D, L, A) .

A quality evaluation request \(\left({a}_{k},{q}_{l},{m}_{l}\right)\) is generated from the (D, A) tuple.

A list is constructed with these quality evaluation requests.

A list sorting is performed either by DQD or by Attributes producing two types of lists:

A combination of requests per DQD generates quality requests for a set of attributes \(\left(AList\left({a}_{z}\right),{q}_{l},{m}_{l}\right)\) .

A combination of requests per attribute generates quality requests for a set of DQD’s \(\left({a}_{k},DList({q}_{l},{m}_{l})\right)\) .

A DQES is returned based on the evaluation selection group parameter (per DQD, per attribute).
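A minimal sketch of this construction is shown below: quality evaluation requests are generated from Req and then grouped either per DQD or per attribute; the metric placeholders and the Python structures are assumptions for illustration.

```python
# Minimal sketch of DQES construction: each request Q_x = (a_k, q_l, m_l) is
# generated from Req, then grouped per DQD or per attribute. The metric names
# stand in for the metric formulas held in the DQP repository.

from collections import defaultdict

Req = [("completeness", 0.60, ["age", "email"]),
       ("accuracy", 0.80, ["age"])]

requests = [(a, d, f"metric_{d}") for (d, level, attrs) in Req for a in attrs]

def group_per_dqd(reqs):
    out = defaultdict(list)
    for a, d, m in reqs:
        out[(d, m)].append(a)              # -> (AList(a_z), q_l, m_l)
    return [(attrs, d, m) for (d, m), attrs in out.items()]

def group_per_attribute(reqs):
    out = defaultdict(list)
    for a, d, m in reqs:
        out[a].append((d, m))              # -> (a_k, DList(q_l, m_l))
    return list(out.items())

print(group_per_dqd(requests))
print(group_per_attribute(requests))
```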

Fig. 9 DQES parameters settings

Quantitative quality evaluation

The authors in [ 66 ] addressed how to evaluate a set of DQDs over a set of attributes. According to this study, the evaluation of Big Data quality is applied and iterated over many samples, and the aggregation and combination of DQD scores are performed after each iteration. The evaluation scores are added to the DQES, which in turn updates the DQP. We proposed an algorithm that computes the quality scores of a dataset based on a given quality mapping and quality metrics.

This algorithm evaluates the quality metrics, collects and validates the resulting scores against the quality requirements, and generates quality rules from these scores [ 66 , 67 ]. There are rules related to each pre-processing activity, such as data cleansing rules, which eliminate data, and data enrichment rules, which replace or add data. Other activities, such as data reduction, reduce the data size by decreasing the number of features or attributes with certain characteristics, such as low variance or high correlation with other features.

In this phase, all the information collected from the previous components (profiling, mapping, DQES) is included in the data quality profile Level 3. The important elements are the set of samples and the data quality evaluation scheme, which is executed on each sample to evaluate the quality of its attributes for specific DQDs.

DQP Level 3 provides all the information needed about the settings represented by the DQES to proceed with the quality evaluation. The DQES contains the following:

The selected DQDs and their related metrics.

The selected attributes with the DQD to be evaluated.

The DQD selection, which is based on the Big Data quality requirements expressed early when initiating a Big Data Quality Project.

Attributes selection is set in the quality selection mapping component (3).

The quantitative quality evaluation methodology is described as follows:

The selected DQD quality metrics measure and evaluate the DQD for each attribute observation in each sample of the sample set; for each observation, the metric returns 1 if the value is correct and 0 otherwise.

Each metric is computed by checking whether the attribute values of the sample observations satisfy the constraints. For example, the accuracy metric of an attribute may define values between 20 and 70 as valid, and any other value as invalid. The count of correct values out of the total sample observations gives the DQD ratio, expressed as a percentage (%). This is performed for all selected attributes and their selected DQDs.

The sample mean over all samples for each evaluated DQD represents a Data Quality Score estimate \(\overline{DQS}\) of that data quality dimension for the data source.

DQP Level 4: an update of DQP Level 3 that includes the data quality evaluation scheme (DQES) with the quality scores per DQD and per attribute (DQES + Scores).

In summary, the quantitative quality evaluation starts with sampling, DQD and DQD-metric selection, and mapping to the data attributes, followed by the quality measurements and the computation of the sample-mean DQD ratios.

Let us denote by \(Q_x\) Score (quality score) the evaluation result of each quality evaluation request \(Q_x\) in the DQES. Depending on the evaluation type, two kinds of DQES, and hence two kinds of result scores, can be identified: scores organized per DQD over all attributes, or per attribute over all DQDs:

\(Q_x(AList(a_z), q_l, m_l) \to Q_x\) ScoreList\((AList(a_z, Score), q_l, m_l)\), or

\(Q_x(a_z, DList(q_l, m_l)) \to Q_x\) ScoreList\((a_z, DList(q_l, m_l, Score))\),

where \(z=1,\dots,r\), with \(r\) the number of selected attributes, and \(l=1,\dots,d\), with \(d\) the number of selected DQDs.

The quality evaluation generates quality scores \(Q_x\) Score. A quality scoring model, provided in the form of quality requirements, is used to interpret these scores, which are expressed as quality acceptance level percentages. The quality requirements might be a set of values, an interval in which values are accepted or rejected, or a single score ratio percentage. The analysis of these scores against the quality requirements leads to the discovery and generation of quality rules for the attributes violating the requirements.

The quantitative quality evaluation process follows the steps described below for the case of the evaluation of a list of DQDs over several attributes (\(Q_x(a_z, DList(q_l, m_l))\)):

N samples (of size n ) are generated from the dataset DS using a BLB-based bootstrap sampling approach.

For each sample \(s_i\) generated in step 1 and for each selected attribute \(a_z\) (\(z=1,\dots,r\)) in the DQES, evaluate all the DQDs in the DList using their related metrics to obtain \(Q_x\) ScoreList\((a_z, DList(q_l, m_l, Score), s_i)\) for each sample \(s_i\).

For all the sample scores, compute the sample mean over the \(N\) samples for each attribute \(a_z\) and each evaluated DQD \(q_l\), denoted \(\overline{q}_{zl}\).

For the dataset DS, compute the quality score mean \(\overline{q}_l\) of each DQD over all attributes \(a_z\); assuming equal attribute weights, \(\overline{q}_l = \frac{1}{r}\sum_{z=1}^{r} \overline{q}_{zl}\).

As illustrated in Fig.  10 , \(q_{zl}s_i\) Score is the evaluation of the DQD \(q_l\) on the sample \(s_i\) for an attribute \(a_z\) with a metric \(m_l\), and \(\overline{q}_{zl}\) represents the quality score sample mean for the attribute \(a_z\).

Fig. 10 Big data sampling and quantitative quality evaluation
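The aggregation described above can be sketched as follows, with completeness as the single DQD, two toy samples, and equal attribute weights assumed:

```python
# Minimal sketch of the quantitative evaluation: each sample is scored per
# attribute and DQD, sample scores are averaged into q_bar_zl per attribute,
# and the attribute means are averaged (equal weights assumed) into q_bar_l.

samples = [
    [{"age": 30}, {"age": None}, {"age": 45}],
    [{"age": 22}, {"age": 61}, {"age": None}],
]

def completeness(sample, attr):
    return sum(r[attr] is not None for r in sample) / len(sample)

attrs, dqd = ["age"], completeness
per_attribute_mean = {
    a: sum(dqd(s, a) for s in samples) / len(samples) for a in attrs   # q_bar_zl
}
dataset_score = sum(per_attribute_mean.values()) / len(per_attribute_mean)  # q_bar_l
print(per_attribute_mean, dataset_score)   # {'age': 0.666...} 0.666...
```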

Quality control

Quality control is initiated when the quality evaluation results are available and reported in the DQES of DQP Level 4. During quality control, all the quality scores are checked against the quality requirements of the Big Data project. If any anomalies or non-conformance are detected, the quality control component forwards a DQP Level 5 to the data quality rules discovery component.

At this point, various cases arise. The process is iterated until the required quality levels are satisfied or the experts decide to stop the quality evaluation and re-evaluate their requirements. A form of quality control is present at each phase, even if it is not explicitly specified within each quality process.

The quality control acts in the following cases:

Case 1: this case applies when the quality is estimated and no rules are yet included in the DQP Level 4 (the DQP is considered a report, since the data quality is still being inspected and only reports, with no actions, are generated).

If the quality scores are accepted, no quality actions need to be applied to the data. The DQP Level 4 remains unchanged and acts as a full data quality report, updated with a positive validation of the data for each quality requirement. It may still include some simple pre-processing, such as attribute selection and filtering. Depending on the data analytics requirements and the expected results planned for the Big Data project, more specific pre-processing actions may be performed, but in this case they are not related to quality.

If the quality scores are not accepted, the DQES scores in DQP Level 4 are analyzed, and the DQP is updated with a quality error report covering the affected DQD scores and their data attributes. A DQP Level 5 is created and passed to the quality rules discovery component, which determines the pre-processing activities to be executed on the data.

Case 2: In the presence of a DQP Level 6 that contains a quality evaluation request of the pre-processed samples with discovered quality rules, the following situations may occur:

When the quality control finds that the DQP Level 6 rules are valid and satisfy the quality requirements, the DQP Level 6 is updated to DQP Level 7 and confirmed as the final data quality profile, which will be applied to the data in the pre-processing phase. DQP Level 7 is significant because it contains validated quality rules.

When the quality control is not fully or only partially satisfied, the DQP Level 6 is sent back to the quality selection and mapping component for adaptation, together with the valid and invalid quality rules, the quality scores, and error reports. These reports flag, with their unacceptable score intervals, the quality rules that have not satisfied the quality requirements. The quality selection and mapping component provides automatic or manual analysis and assessment of the unsatisfied quality rules with respect to their targeted DQDs, attributes, and quality requirements. An adaptation of the quality requirements is needed to re-validate these rules. Finally, the expert users have the final word on whether to continue, or to break the process and proceed to the pre-processing phase with the valid rules only. As part of the framework's reuse specification, the invalid rules are kept within the DQP for future re-evaluation.

Case 3: The control component will always proceed based on the quality scores and quality requirements for both input and pre-processed data. Continuous control and monitoring are responsible for initiating DQP updates and adaptation if the quality requirements are relaxed.
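As an illustration of how such control logic might compare DQES scores against the project's quality requirements and decide whether to escalate to rules discovery, the following Python sketch uses a simplified dictionary-based profile; the field names and level handling are assumptions, not the framework's exact DQP structure.

```python
def control_scores(dqes_scores, requirements):
    """Compare DQD scores per attribute against required minimum acceptance levels.

    dqes_scores:  {attribute: {dqd: score_in_[0, 1]}}
    requirements: {dqd: minimum_accepted_score}
    Returns the list of violations to forward to rules discovery.
    """
    violations = []
    for attribute, dqd_scores in dqes_scores.items():
        for dqd, score in dqd_scores.items():
            required = requirements.get(dqd)
            if required is not None and score < required:
                violations.append({"attribute": attribute, "dqd": dqd,
                                   "score": score, "required": required})
    return violations

def quality_control(dqp):
    """Case 1 logic: if all scores pass, keep the DQP as a quality report;
    otherwise attach an error report and escalate the profile level."""
    violations = control_scores(dqp["dqes"], dqp["requirements"])
    if not violations:
        dqp["status"] = "accepted"          # DQP Level 4 kept as a data quality report
    else:
        dqp["status"] = "rules_discovery"   # DQP Level 5 handed to rules discovery
        dqp["error_report"] = violations
    return dqp

# Example usage with illustrative scores and requirements
dqp = {"dqes": {"age": {"completeness": 0.5}, "city": {"completeness": 0.9}},
       "requirements": {"completeness": 0.8}}
result = quality_control(dqp)
print(result["status"], result.get("error_report"))
```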

Quality rules discovery, validation, optimization, and execution

In [67], it was reported that if the DQD scores do not conform to the quality requirements, the failed scores are used to discover data quality rules. When executed on the data, these rules enhance its quality. They are based on well-known pre-processing activities such as data cleansing. Each activity has a set of functions targeting different types of data in order to increase the DQD ratio and the overall quality of the data source or dataset(s).

When quality rules (QR) are applied to a sample set S, a pre-processed sample set S' is generated. A quality evaluation process is invoked on S', generating DQD scores for S'. A score comparison between S and S' is then conducted to retain only qualified and valid rules with a higher percentage of success on the data. An optimization scheme is then applied to the list of valid quality rules before they are applied to production data. The predefined optimization schemes range over (1) rule priority, (2) rule redundancy, (3) rule removal, (4) rule grouping per attribute, (5) grouping per DQD, and (6) grouping of duplicate rules.

Quality rules discovery: The discovery is based on the DQP Level 5 received from the quality control component. An analysis of the quality scores is initiated, and an error report is extracted. If the DQD scores do not conform to the quality requirements, the failed scores are used to discover data quality rules. When executed on the data, these rules enhance its quality. They are based on well-known pre-processing activities such as data cleansing. The discovery component comprises several modules: analysis of the DQES DQD scores against the requirements, combination of attribute pre-processing activities for each targeted DQD, and rule generation.

For example, an attribute with a 50% missing-data score is not accepted when the required score is 20% or less. This triggers the generation of a quality rule consisting of a data cleansing activity for the observations that do not satisfy the quality requirements. The data cleansing or data enrichment activity is selected from the Big Data quality profile repository. The quality rule targets all the related attributes marked for pre-processing, in order to reduce the missing-data ratio from 50% to 20% for the completeness DQD. Moreover, in the case of completeness, cleansing of missing values is not the only option: several alternative pre-processing activities are available, such as a missing-value replacement activity offering several replacement methods (for example the mean, mode, or median).
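A hedged Python sketch of this kind of rule discovery is shown below: it maps each failed DQD score to a candidate pre-processing activity drawn from a small illustrative catalogue. The catalogue contents and field names are assumptions made for the example only.

```python
def discover_rules(violations, default_strategy="mean"):
    """Turn each quality violation into a candidate data quality rule.

    For a completeness violation (e.g. 50% missing vs. a 20% tolerance),
    propose a missing-value replacement activity; other DQDs map to other
    pre-processing activities, here taken from a toy catalogue."""
    activity_catalogue = {
        "completeness": ("missing_value_replacement", ["mean", "median", "mode", "drop_row"]),
        "accuracy": ("outlier_correction", ["winsorize", "drop_row"]),
        "consistency": ("format_standardization", ["normalize_units", "recode"]),
    }
    rules = []
    for v in violations:
        activity, functions = activity_catalogue.get(v["dqd"], ("manual_review", []))
        chosen = default_strategy if default_strategy in functions else (functions[0] if functions else None)
        rules.append({
            "attribute": v["attribute"],
            "dqd": v["dqd"],
            "activity": activity,
            "function": chosen,
            "target_score": v["required"],   # e.g. bring completeness from 0.5 up to 0.8
        })
    return rules

# Example usage: one failed completeness score produces one candidate rule
violations = [{"attribute": "age", "dqd": "completeness", "score": 0.5, "required": 0.8}]
print(discover_rules(violations))
```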

The pre-processing activities are provided by the repository to achieve the required data quality. Many possibilities for pre-processing activities selection are available:

Automatic, by discovering and suggesting a set of activities or DQ rules.

Predefined, by selecting ready-to-use quality rule proposals from the exploratory quality profiling component, or predefined pre-processing activity functions from the repository, indexed by DQDs.

Manual, giving the expert the ability to query the exploratory quality profiling results for the best rules, achieving the required quality using KNN-based filtering.

Quality rules validation: The quality rules generated by the discovery component are set in the DQP Level 6. The rules validation process starts when the DQR list is applied to the sample set S, resulting in a pre-processed sample set S' generated by the related pre-processing activities. A quality evaluation process is then invoked on S', generating DQD scores for S'. A score comparison between S and S' is conducted to retain only qualified and valid rules with a higher percentage of success on the data. After analyzing these scores, two sets of rules are identified: successful rules and failed rules.
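The following sketch illustrates, under the same simplified representation as above and with only mean imputation implemented, how rules could be validated by applying them to a sample S, re-evaluating the DQD score on S', and keeping only the rules that meet the requirement; it is an illustration, not the framework's implementation.

```python
def apply_rules(sample, rules):
    """Produce a pre-processed sample S' by executing each rule on S (mean imputation only, for brevity)."""
    processed = [dict(row) for row in sample]
    for rule in rules:
        if rule["activity"] == "missing_value_replacement" and rule["function"] == "mean":
            attr = rule["attribute"]
            present = [r[attr] for r in processed if r.get(attr) is not None]
            mean_value = sum(present) / len(present) if present else None
            for r in processed:
                if r.get(attr) is None:
                    r[attr] = mean_value
    return processed

def validate_rules(sample, rules, metric, requirement):
    """Keep only the rules that raise the DQD score of S' above the requirement."""
    valid, invalid = [], []
    for rule in rules:
        before = metric(sample, rule["attribute"])
        after = metric(apply_rules(sample, [rule]), rule["attribute"])
        (valid if after >= requirement and after > before else invalid).append(rule)
    return valid, invalid

# Example usage with a simple completeness metric
def completeness(sample, attribute):
    return sum(r.get(attribute) is not None for r in sample) / len(sample)

sample = [{"age": 34}, {"age": None}, {"age": 27}, {"age": None}]
rules = [{"attribute": "age", "dqd": "completeness",
          "activity": "missing_value_replacement", "function": "mean"}]
print(validate_rules(sample, rules, completeness, requirement=0.8))
```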

Quality rules optimization: After the set of discovered valid quality rules has been selected, an optimization process is activated to reorganize and filter the rules. This is necessary because of the evaluation parameters set in the mapping component and the refinement of the quality requirements. Together with the rules validation process, these choices produce a list of individual quality rules that, if applied as generated, might have the following consequences:

Redundant rules.

Ineffective rules, due to the order of execution.

Multiple rules that target the same DQD with the same requirements.

Multiple rules that target the same attributes for the same DQD and requirements.

Rules that drop attributes or rows must be applied first, or given a higher priority, to avoid applying rules to data items that are meant to be dropped (Table 8).

The quality rules optimization component applies an optimization scheme to the list of valid quality rules before they are applied to production data in the pre-processing phase. The predefined optimization schemes vary according to the following:

Rules execution priority per attribute or DQD, per pre-processing activity, or pre-processing function.

Rules redundancy removal per attributes or DQDs.

Rules grouping and combination per activity, per attribute, per DQD, or for duplicate rules.

For invalid rules, the component offers several actions, including rule removal or rule adaptation based on proposals previously generated by the exploratory quality profiling component for the same targeted tuple (attributes, DQDs).
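A minimal Python sketch of such an optimization pass is given below, covering duplicate removal, grouping per (attribute, DQD), and priority for rules that drop attributes or rows; the exact schemes and their ordering in the framework may differ.

```python
def optimize_rules(rules):
    """Apply simple predefined optimization schemes to a list of valid rules."""
    # 1. Redundancy removal: drop duplicate (attribute, dqd, activity, function) rules.
    seen, deduplicated = set(), []
    for rule in rules:
        key = (rule["attribute"], rule["dqd"], rule["activity"], rule.get("function"))
        if key not in seen:
            seen.add(key)
            deduplicated.append(rule)

    # 2. Grouping: keep one rule per (attribute, DQD) pair.
    grouped = {}
    for rule in deduplicated:
        grouped.setdefault((rule["attribute"], rule["dqd"]), rule)

    # 3. Priority: rules that drop attributes or rows run before value-level fixes.
    def priority(rule):
        return 0 if rule["activity"] in ("drop_attribute", "drop_row") else 1

    return sorted(grouped.values(), key=priority)

# Example usage: a duplicate imputation rule and a drop-attribute rule
rules = [
    {"attribute": "age", "dqd": "completeness", "activity": "missing_value_replacement", "function": "mean"},
    {"attribute": "age", "dqd": "completeness", "activity": "missing_value_replacement", "function": "mean"},
    {"attribute": "notes", "dqd": "completeness", "activity": "drop_attribute", "function": None},
]
print(optimize_rules(rules))  # drop_attribute rule first, duplicate removed
```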

Quality rules execution: The quality rules execution consists of pre-processing the data using the DQP, which embeds the data quality rules that enhance the quality to reach the agreed requirements. As part of the monitoring module, a sampling set from the pre-processed data is used to re-assess the quality and detect eventual failures.

Quality monitoring

Quality monitoring is a continuous quality control process that relies on the DQP. The purpose of monitoring is to validate the DQP across all the Big Data lifecycle processes. The DQP repository is updated during and after the complete lifecycle, as well as in response to user feedback on the data, the quality requirements, and the mapping.

As illustrated in Fig. 11, the monitoring process takes scheduled snapshots of the pre-processed Big Data throughout the BDQMF for the Big Data quality project. Each data snapshot is a set of samples whose quality is evaluated in BDQMF component (4). Quality control is then conducted on the resulting quality scores, and the DQP is updated. The quality report may highlight a quality failure and the evolution of its ratio across multiple sampling snapshots of the data.

Figure 11. Quality monitoring component

The monitoring process strengthens and enforces the quality across the Big Data value chain using the BDQM framework while reusing the data quality profile information. For each quality monitoring iteration on the datasets from the data source, quality reports are added to the data quality profile, updating it to a DQP Level 10 .
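As a rough illustration of this monitoring loop, the sketch below takes repeated sample snapshots, re-computes a single DQD score per attribute, and appends a report per iteration; the scheduling (a simple sleep) and the report layout are placeholders for the framework's actual monitoring machinery.

```python
import random
import time

def monitor(dataset, metric, attributes, requirements, snapshots=3, sample_size=50, interval_seconds=1):
    """Take scheduled sample snapshots of the (pre-processed) data, re-evaluate the
    DQD scores, and append a quality report on every iteration."""
    reports = []
    for iteration in range(snapshots):
        snapshot = random.sample(dataset, min(sample_size, len(dataset)))
        scores = {a: metric(snapshot, a) for a in attributes}
        failures = {a: s for a, s in scores.items() if s < requirements.get(a, 0.0)}
        reports.append({"iteration": iteration, "scores": scores, "failures": failures})
        time.sleep(interval_seconds)  # stand-in for a real scheduler
    return reports

# Example usage with a simple completeness metric
def completeness(sample, attribute):
    return sum(r.get(attribute) is not None for r in sample) / len(sample)

data = [{"age": 30}, {"age": None}, {"age": 41}, {"age": 25}]
print(monitor(data, completeness, ["age"], {"age": 0.9}, snapshots=2, sample_size=3, interval_seconds=0))
```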

Data processing, analytics, and visualization

This process involves the application of algorithms or methodologies that extract insights from the ready-to-use data with enhanced quality. The value of the processed data is then presented visually, as dashboards and graphically enhanced charts, so that decision-makers can act on it. Big Data visualization approaches are of high importance for the final exploitation of the data.

Implementations: Dataflow and quality processes development

In this section, we give an overview of the dataflow across the various processes of the framework and highlight the implemented quality management processes, along with the supporting application interfaces developed to support the main processes. Finally, we describe the ongoing process implementations and evaluations.

Framework dataflow

In Fig. 12, we illustrate the whole process flow of the framework, from the inception of the quality project, with its specification and requirements, to the quality monitoring phase. As an ongoing process, monitoring is part of the quality enforcement loop and may trigger other processes that handle quality profile operations such as DQP adaptation, upgrade, or reuse.

Figure 12. Big data quality management framework data flow

In Table 9 , we enumerate and detail the multiple processes and their interactions within the BDQM Framework components including their inputs and outputs after executing related activities with the quality profile (DQP), as detailed in the previous section.

Quality management processes’ implementation

In this section, we describe the implementation of our framework's important components, processes, and their contributions towards the quality management of Big Data across its lifecycle.

Core processes implementation

As noted above, the core framework processes have been implemented and evaluated; in the following, we describe how this was done for each component.

Quality profiling: One of the central components of our framework is the data quality profile (DQP). Initially, the DQP is a simple data profile of a Big Data set, stored as an XML file (a sample DQP is illustrated in Fig. 13).

Figure 13. Example of data quality profile

After traversing the processes of several framework components, it is upgraded into a data quality profile. The data quality evaluation process is one of the activities that updates the DQP with quality scores, which are later used to discover data quality rules. These rules, when applied to the original data, ensure an output data set of higher quality. The DQP is finally executed by the pre-processing component. By the end of the lifecycle, the DQP contains information such as data quality rules that target a set of data sources with multiple datasets, data attributes, data quality dimensions such as accuracy, and pre-processing activities like data cleansing, data integration, and data normalization. In other words, the DQP holds all the information about the data, its quality, the user quality requirements, the DQDs, the quality levels, the attributes, the data quality evaluation scheme (DQES), the quality scores, and the data quality rules. The DQP is stored in the DQP repository, which performs many DQP-related tasks; the DQP lifecycle and its repository are described further below.
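Since the DQP is stored as an XML file, the following Python sketch shows how a downstream component might read scores and rules back from such a profile using the standard library; the element and attribute names are hypothetical and are not the exact schema of the profile shown in Fig. 13.

```python
import xml.etree.ElementTree as ET

# Hypothetical DQP fragment; element and attribute names are illustrative only.
DQP_XML = """
<dqp level="6" dataset="patients.csv">
  <dqes>
    <score attribute="age" dqd="completeness" value="0.62" required="0.80"/>
    <score attribute="city" dqd="consistency" value="0.91" required="0.85"/>
  </dqes>
  <rules>
    <rule attribute="age" dqd="completeness" activity="missing_value_replacement" function="mean"/>
  </rules>
</dqp>
"""

def load_dqp(xml_text):
    """Parse a DQP-like XML document into plain dictionaries for downstream components."""
    root = ET.fromstring(xml_text)
    scores = [dict(s.attrib) for s in root.findall("./dqes/score")]
    rules = [dict(r.attrib) for r in root.findall("./rules/rule")]
    return {"level": root.get("level"), "dataset": root.get("dataset"),
            "scores": scores, "rules": rules}

print(load_dqp(DQP_XML))
```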

Quality requirements dashboard: Developed as a web-based application, shown in Fig. 14 below, to capture user requirements and other quality information, such as the specification of data quality dimension requirements. The application can be extended with extra information about the data sources, such as attributes and their types. The interface guides the user in specifying the right attribute values and also offers the option to upload an XML file describing the relationships between attributes. The recorded requirements are finally saved into a data quality profile Level 0, which is used in the next stage of the quality management process.

Figure 14. Quality requirements dashboard

Data preparation and sampling: The framework operations start once the quality project's minimal specifications are set. The framework initiates and provides a data quality summary named the data quality profile (DQP) by running an exploratory quality profiling assessment on data samples (using the BLB sampling algorithm). The DQP is designed to be the core component of the framework, and every update and every quality-related result is recorded in it. The DQP is stored in a quality repository and registered in the Big Data's provenance to keep track of data changes due to quality enhancements.

Data quality mapping and rule discovery components: Data quality mapping simplifies and adds more control to the whole data quality assessment process. The implemented mapping links and categorizes all the elements required by the quality project, from Big Data quality characteristics, pre-processing activities, and their related technique functions, to data quality rules, dimensions, and their metrics. The discovery of data quality rules from the evaluation results reveals the actions and transformations that, when applied to the data set, achieve the targeted quality level. These rules are the main ingredients of the pre-processing activities. The role of a DQ rule is to tackle the sources of bad quality by defining a list of actions related to each quality score. The DQ rules are the result of a systematic and planned analysis of the data quality assessment.

Quality profile repository (QPREPO): Finally, our framework implements the QPREPO to manage the data quality profiles for different data types and domains and to adapt or optimize existing profiles. This repository manages the data quality dimensions with their related metrics, as well as the pre-processing activities and their activity functions. A QPREPO entry is created for each Big Data quality project, with the related DQP containing information about each dataset, data source, data domain, and data user. This information is essential for DQP reuse, adaptation, and enhancement for the same or different data sources.

Implemented approaches for quality assessment

The framework uses various approaches for quality assessment: (1) exploratory quality profiling; (2) a quantitative quality assessment approach using DQD metrics; and (3) a qualitative quality assessment component that is anticipated to be added.

Exploratory quality profiling implements an automatic quality evaluation that is performed systematically on all data attributes for the basic DQDs. The resulting calculated scores are used to generate quality rules for each variation of the quality tolerance ratio. These rules are then applied to other data samples and the quality is reassessed. An analysis of the results provides an interactive, quality-based rule search using several ranking algorithms (maximization, minimization, applying weights).
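To illustrate the kind of ranking applied to the rule proposals (maximization, minimization, or weighting of expected DQD scores), here is a small Python sketch; the weighting scheme, field names, and example proposals are assumptions for illustration only.

```python
def rank_proposals(proposals, weights, top_k=3, maximize=True):
    """Rank quality rule proposals by a weighted combination of their expected DQD scores."""
    def weighted_score(proposal):
        return sum(weights.get(dqd, 0.0) * score
                   for dqd, score in proposal["expected_scores"].items())
    return sorted(proposals, key=weighted_score, reverse=maximize)[:top_k]

# Example usage with illustrative proposals and weights
proposals = [
    {"rule": "impute age with mean", "expected_scores": {"completeness": 0.95, "accuracy": 0.80}},
    {"rule": "drop rows with missing age", "expected_scores": {"completeness": 1.00, "accuracy": 0.70}},
    {"rule": "impute age with median", "expected_scores": {"completeness": 0.95, "accuracy": 0.85}},
]
print(rank_proposals(proposals, weights={"completeness": 0.6, "accuracy": 0.4}, top_k=2))
```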

The Quantitative Quality Assessment implements a quick data quality evaluation strategy supported through sampling and profiling processes for Big Data. The evaluation is conducted by measuring the data quality dimensions (DQDs) for attributes using specific metrics to calculate a quality score.

The qualitative quality assessment approach implements a deeper quality assessment to discover hidden quality aspects and their impact on the outputs of the Big Data lifecycle. These quality aspects must be quantified into scores and mapped to the related attributes and DQDs. This quantification is achieved by applying several feature selection strategies and algorithms to data samples. The qualitative insights are then combined with those obtained earlier from the quantitative quality evaluation in the quality management process.

Framework development, deployment, and evaluation

Development, deployment, and evaluation of our BDQMF framework follow a systematic, modular approach in which the various components of the framework are developed and tested independently and then integrated with the other components to compose the complete solution. Most of the components are implemented in R and Python using the SparkR and PySpark libraries, respectively. The supporting files, such as the DQP, the DQES, and the configuration files, are written in XML and JSON formats. Big Data quality project requests and constraints, including the data sources and the quality expectations, are handled within the solution, where more than one module might be involved. The BDQMF components are deployed following the Apache Hadoop and Spark ecosystem architecture.

The deployed BDQMF modules and the developed APIs are described in the following:

Quality settings mapper (QSM): it implements an interface for the automatic selection and mapping of DQDs and dataset attributes from the initial DQP.

Quality settings parser (QSP): responsible for parsing and loading parameters to the execution environment from DQP settings to data files. It is also used to extract quality rules and scores from the DQES in the DQP.

Data loader (DL): implements filtering, selecting, and loading of all types of data files required by the BDQMF, including datasets from data sources, into the Spark environment (e.g., DataFrames, tables); the loaded data is either used by the various processes or persisted in the database for further reuse. For data selection, it uses SQL to retrieve only the attributes set in the DQP settings (a minimal sketch is given after this list).

Data samples generator (DSG): it generates data samples from multiple data sources.

Quality inspector and profiler (QIP): it is responsible for all qualitative and quantitative quality evaluations among data samples for all the BDQMF lifecycle phases. The inspector assesses all the default and required DQD’s, and all quality evaluations are set into the DQES within the DQP file.

Preprocessing activities and functions execution engine (PPAF-E): all the repository pre-processing activities, along with their related functions, are implemented as APIs in Python and R. When requested, this library loads the necessary methods and executes them within the pre-processing activities for rules validation and rules execution in phase 9.

Quality rules manager (QRM): it is one of the most important modules of the framework. It implements and delivers the following features:

Analyzes quality results.

Discovers and generates quality rule proposals.

Validates quality rules against the requirement settings.

Refines and optimizes quality rules.

Performs quality rule ACID operations in the DQP files and the repository.

Quality monitor (QM) : it is responsible for monitoring, triggering, and reporting any quality change all over the Big Data lifecycle to assure the efficiency of quality improvement of the discovered data quality rules.

BDQMF-Repo: the repository where all the quality-related files, settings, requirements, and results are stored. The repository uses HBase or MongoDB to meet the requirements of Big Data ecosystem environments and to provide scalability for intensive data updates.
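As referenced in the data loader item above, the following is a minimal PySpark sketch of attribute selection driven by the DQP settings; the file path, column names, and helper function are illustrative assumptions rather than the module's actual code.

```python
from pyspark.sql import SparkSession

def load_selected_attributes(path, selected_attributes, table_name="dataset"):
    """Load a CSV dataset into Spark and keep only the attributes declared in the DQP settings."""
    spark = SparkSession.builder.appName("bdqmf-data-loader").getOrCreate()
    df = spark.read.csv(path, header=True, inferSchema=True)
    df.createOrReplaceTempView(table_name)
    columns = ", ".join(selected_attributes)          # attributes taken from the DQP settings
    return spark.sql(f"SELECT {columns} FROM {table_name}")

# Hypothetical usage: keep only the attributes targeted by the quality project.
# selected = ["age", "city", "admission_date"]
# df = load_selected_attributes("hdfs:///data/patients.csv", selected)
# df.show(5)
```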

Big Data quality has attracted the attention of researchers as it is considered the key differentiator that leads to high-quality insights and data-driven decisions. In this paper, a Big Data Quality Management Framework for addressing end-to-end quality in the Big Data lifecycle was proposed. The framework is based on a data quality profile, which is augmented with valuable information as it travels across the different stages of the framework, starting from the Big Data project parameters, quality requirements, quality profiling, and quality rule proposals. The exploratory quality profiling feature, which extracts quality information from the data, helps build a robust DQP with quality rule proposals and eases the configuration of the data quality evaluation scheme. Moreover, the extracted quality rule proposals are highly beneficial for the quality dimensions mapping and attribute selection component, supporting users with quality data indicators characterized by their profile.

The framework dataflow shows that the quality of any Big Data set is evaluated through the exploratory quality profiling component and through the extraction and validation of quality rules, leading to an improvement in its quality. It is of great importance to ensure the right selection of a combination of targeted DQD levels, observations (rows), and attributes (columns) for efficient quality results, while not sacrificing vital data by considering only one DQD. The resulting quality profile, based on the quality assessment results, confirms that the quality information it contains significantly improves the quality of Big Data.

In future work, we plan to extend the quantitative quality profiling with qualitative evaluation. We also plan to extend the framework to cope with unstructured Big Data quality assessment.

Availability of data and materials

Data used in this work is available from the first author and can be provided upon request. The data includes sampling data, pre-processed data, etc.

Chen M, Mao S, Liu Y. Big data: A survey. Mobile Netw Appl. 2014;19:171–209. https://doi.org/10.1007/s11036-013-0489-0 .


Chiang F, Miller RJ. Discovering data quality rules. Proceed VLDB Endowment. 2008;1:1166–77.

Yeh, P.Z., Puri, C.A., 2010. An Efficient and Robust Approach for Discovering Data Quality Rules, in: 2010 22nd IEEE International Conference on Tools with Artificial Intelligence (ICTAI). Presented at the 2010 22nd IEEE International Conference on Tools with Artificial Intelligence (ICTAI), pp. 248–255. https://doi.org/10.1109/ICTAI.2010.43

Ciancarini, P., Poggi, F., Russo, D., 2016. Big Data Quality: A Roadmap for Open Data, in: 2016 IEEE Second International Conference on Big Data Computing Service and Applications (BigDataService). Presented at the 2016 IEEE Second International Conference on Big Data Computing Service and Applications (BigDataService), pp. 210–215. https://doi.org/10.1109/BigDataService.2016.37

Firmani D, Mecella M, Scannapieco M, Batini C. On the meaningfulness of “big data quality” (Invited Paper). Data Sci Eng. 2016;1:6–20. https://doi.org/10.1007/s41019-015-0004-7 .

Rivas, B., Merino, J., Serrano, M., Caballero, I., Piattini, M., 2015. I8K|DQ-BigData: I8K Architecture Extension for Data Quality in Big Data, in: Advances in Conceptual Modeling, Lecture Notes in Computer Science. Presented at the International Conference on Conceptual Modeling, Springer, Cham, pp. 164–172. https://doi.org/10.1007/978-3-319-25747-1_17

Manyika, J., Chui, M., Brown, B., Bughin, J., Dobbs, R., Roxburgh, C., Byers, A.H., 2011. Big data: The next frontier for innovation, competition, and productivity. McKinsey Global Institute 1–137.

Chen CP, Zhang C-Y. Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Inf Sci. 2014;275:314–47.

Hashem IAT, Yaqoob I, Anuar NB, Mokhtar S, Gani A, Ullah Khan S. The rise of “big data” on cloud computing: Review and open research issues. Inf Syst. 2015;47:98–115. https://doi.org/10.1016/j.is.2014.07.006 .

Hu H, Wen Y, Chua T-S, Li X. Toward scalable systems for big data analytics: a technology tutorial. IEEE Access. 2014;2:652–87. https://doi.org/10.1109/ACCESS.2014.2332453 .

Wielki J. The Opportunities and Challenges Connected with Implementation of the Big Data Concept. In: Mach-Król M, Olszak CM, Pełech-Pilichowski T, editors. Advances in ICT for Business. Springer International Publishing: Industry and Public Sector, Studies in Computational Intelligence; 2015. p. 171–89.


Ali-ud-din Khan, M., Uddin, M.F., Gupta, N., 2014. Seven V’s of Big Data understanding Big Data to extract value, in: American Society for Engineering Education (ASEE Zone 1), 2014 Zone 1 Conference of The. Presented at the American Society for Engineering Education (ASEE Zone 1), 2014 Zone 1 Conference of the, pp. 1–5. https://doi.org/10.1109/ASEEZone1.2014.6820689

Kepner, J., Gadepally, V., Michaleas, P., Schear, N., Varia, M., Yerukhimovich, A., Cunningham, R.K., 2014. Computing on masked data: a high performance method for improving big data veracity, in: 2014 IEEE High Performance Extreme Computing Conference (HPEC). Presented at the 2014 IEEE High Performance Extreme Computing Conference (HPEC), pp. 1–6. https://doi.org/10.1109/HPEC.2014.7040946

Saha, B., Srivastava, D., 2014. Data quality: The other face of Big Data, in: 2014 IEEE 30th International Conference on Data Engineering (ICDE). Presented at the 2014 IEEE 30th International Conference on Data Engineering (ICDE), pp. 1294–1297. https://doi.org/10.1109/ICDE.2014.6816764

Gandomi A, Haider M. Beyond the hype: Big data concepts, methods, and analytics. Int J Inf Manage. 2015;35:137–44.

Pääkkönen P, Pakkala D. Reference architecture and classification of technologies, products and services for big data systems. Big Data Research. 2015;2:166–86. https://doi.org/10.1016/j.bdr.2015.01.001 .

Oliveira, P., Rodrigues, F., Henriques, P.R., 2005. A Formal Definition of Data Quality Problems., in: IQ.

Maier, M., Serebrenik, A., Vanderfeesten, I.T.P., 2013. Towards a Big Data Reference Architecture. University of Eindhoven.

Caballero, I., Piattini, M., 2003. CALDEA: a data quality model based on maturity levels, in: Third International Conference on Quality Software, 2003. Proceedings. Presented at the Third International Conference on Quality Software, 2003. Proceedings, pp. 380–387. https://doi.org/10.1109/QSIC.2003.1319125

Sidi, F., Shariat Panahy, P.H., Affendey, L.S., Jabar, M.A., Ibrahim, H., Mustapha, A., 2012. Data quality: A survey of data quality dimensions, in: 2012 International Conference on Information Retrieval Knowledge Management (CAMP). Presented at the 2012 International Conference on Information Retrieval Knowledge Management (CAMP), pp. 300–304. https://doi.org/10.1109/InfRKM.2012.6204995

Chen, M., Song, M., Han, J., Haihong, E., 2012. Survey on data quality, in: 2012 World Congress on Information and Communication Technologies (WICT). Presented at the 2012 World Congress on Information and Communication Technologies (WICT), pp. 1009–1013. https://doi.org/10.1109/WICT.2012.6409222

Batini C, Cappiello C, Francalanci C, Maurino A. Methodologies for data quality assessment and improvement. ACM Comput Surv. 2009;41:1–52. https://doi.org/10.1145/1541880.1541883 .

Glowalla, P., Balazy, P., Basten, D., Sunyaev, A., 2014. Process-Driven Data Quality Management–An Application of the Combined Conceptual Life Cycle Model, in: 2014 47th Hawaii International Conference on System Sciences (HICSS). Presented at the 2014 47th Hawaii International Conference on System Sciences (HICSS), pp. 4700–4709. https://doi.org/10.1109/HICSS.2014.575

Wand Y, Wang RY. Anchoring data quality dimensions in ontological foundations. Commun ACM. 1996;39:86–95. https://doi.org/10.1145/240455.240479 .

Wang, R.Y., Strong, D.M., 1996. Beyond accuracy: What data quality means to data consumers. Journal of management information systems 5–33.

Cappiello, C., Caro, A., Rodriguez, A., Caballero, I., 2013. An Approach To Design Business Processes Addressing Data Quality Issues.

Hazen BT, Boone CA, Ezell JD, Jones-Farmer LA. Data quality for data science, predictive analytics, and big data in supply chain management: An introduction to the problem and suggestions for research and applications. Int J Prod Econ. 2014;154:72–80. https://doi.org/10.1016/j.ijpe.2014.04.018 .

Caballero, I., Verbo, E., Calero, C., Piattini, M., 2007. A Data Quality Measurement Information Model Based On ISO/IEC 15939., in: ICIQ. pp. 393–408.

Juddoo, S., 2015. Overview of data quality challenges in the context of Big Data, in: 2015 International Conference on Computing, Communication and Security (ICCCS). Presented at the 2015 International Conference on Computing, Communication and Security (ICCCS), pp. 1–9. https://doi.org/10.1109/CCCS.2015.7374131

Woodall P, Borek A, Parlikad AK. Data quality assessment: The hybrid approach. Inf Manage. 2013;50:369–82. https://doi.org/10.1016/j.im.2013.05.009 .

Goasdoué, V., Nugier, S., Duquennoy, D., Laboisse, B., 2007. An Evaluation Framework For Data Quality Tools., in: ICIQ. pp. 280–294.

Caballero, I., Serrano, M., Piattini, M., 2014. A Data Quality in Use Model for Big Data, in: Indulska, M., Purao, S. (Eds.), Advances in Conceptual Modeling, Lecture Notes in Computer Science. Springer International Publishing, pp. 65–74. https://doi.org/10.1007/978-3-319-12256-4_7

Cai L, Zhu Y. The challenges of data quality and data quality assessment in the big data era. Data Sci J. 2015. https://doi.org/10.5334/dsj-2015-002 .

Philip Woodall, A.B., 2014. An Investigation of How Data Quality is Affected by Dataset Size in the Context of Big Data Analytics.

Laranjeiro, N., Soydemir, S.N., Bernardino, J., 2015. A Survey on Data Quality: Classifying Poor Data, in: 2015 IEEE 21st Pacific Rim International Symposium on Dependable Computing (PRDC). Presented at the 2015 IEEE 21st Pacific Rim International Symposium on Dependable Computing (PRDC), pp. 179–188. https://doi.org/10.1109/PRDC.2015.41

Liu, J., Li, J., Li, W., Wu, J., 2016. Rethinking big data: A review on the data quality and usage issues. ISPRS Journal of Photogrammetry and Remote Sensing, Theme issue “State-of-the-art in photogrammetry, remote sensing and spatial information science” 115, 134–142. https://doi.org/10.1016/j.isprsjprs.2015.11.006

Rao, D., Gudivada, V.N., Raghavan, V.V., 2015. Data quality issues in big data, in: 2015 IEEE International Conference on Big Data (Big Data). Presented at the 2015 IEEE International Conference on Big Data (Big Data), pp. 2654–2660. https://doi.org/10.1109/BigData.2015.7364065

Zhou, H., Lou, J.G., Zhang, H., Lin, H., Lin, H., Qin, T., 2015. An Empirical Study on Quality Issues of Production Big Data Platform, in: 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering (ICSE). Presented at the 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering (ICSE), pp. 17–26. https://doi.org/10.1109/ICSE.2015.130

Becker, D., King, T.D., McMullen, B., 2015. Big data, big data quality problem, in: 2015 IEEE International Conference on Big Data (Big Data). Presented at the 2015 IEEE International Conference on Big Data (Big Data), IEEE, Santa Clara, CA, USA, pp. 2644–2653. https://doi.org/10.1109/BigData.2015.7364064

Maślankowski, J., 2014. Data Quality Issues Concerning Statistical Data Gathering Supported by Big Data Technology, in: Kozielski, S., Mrozek, D., Kasprowski, P., Małysiak-Mrozek, B., Kostrzewa, D. (Eds.), Beyond Databases, Architectures, and Structures, Communications in Computer and Information Science. Springer International Publishing, pp. 92–101. https://doi.org/10.1007/978-3-319-06932-6_10

Fürber, C., Hepp, M., 2011. Towards a Vocabulary for Data Quality Management in Semantic Web Architectures, in: Proceedings of the 1st International Workshop on Linked Web Data Management, LWDM ’11. ACM, New York, NY, USA, pp. 1–8. https://doi.org/10.1145/1966901.1966903

Corrales DC, Corrales JC, Ledezma A. How to address the data quality issues in regression models: a guided process for data cleaning. Symmetry. 2018;10:99.

Fan, W., 2008. Dependencies revisited for improving data quality, in: Proceedings of the Twenty-Seventh ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems. ACM, pp. 159–170.

Kläs, M., Putz, W., Lutz, T., 2016. Quality Evaluation for Big Data: A Scalable Assessment Approach and First Evaluation Results, in: 2016 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM-MENSURA). Presented at the 2016 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM-MENSURA), pp. 115–124. https://doi.org/10.1109/IWSM-Mensura.2016.026

Rahm E, Do HH. Data cleaning: Problems and current approaches. IEEE Data Eng Bull. 2000;23:3–13.

Dallachiesa, M., Ebaid, A., Eldawy, A., Elmagarmid, A., Ilyas, I.F., Ouzzani, M., Tang, N., 2013. NADEEF: A Commodity Data Cleaning System, in: Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data, SIGMOD ’13. ACM, New York, NY, USA, pp. 541–552. https://doi.org/10.1145/2463676.2465327

Ebaid A, Elmagarmid A, Ilyas IF, Ouzzani M, Quiane-Ruiz J-A, Tang N, Yin S. NADEEF: A generalized data cleaning system. Proceed VLDB Endowment. 2013;6:1218–21.

Elmagarmid, A., Ilyas, I.F., Ouzzani, M., Quiané-Ruiz, J.-A., Tang, N., Yin, S., 2014. NADEEF/ER: generic and interactive entity resolution. ACM Press, pp. 1071–1074. https://doi.org/10.1145/2588555.2594511

Tang N. Big Data Cleaning. In: Chen L, Jia Y, Sellis T, Liu G, editors. Web Technologies and Applications. Lecture Notes in Computer Science: Springer International Publishing; 2014. p. 13–24.


Ge M, Dohnal V. Quality management in big data informatics. 2018;5:19. https://doi.org/10.3390/informatics5020019 .

Jimenez-Marquez JL, Gonzalez-Carrasco I, Lopez-Cuadrado JL, Ruiz-Mezcua B. Towards a big data framework for analyzing social media content. Int J Inf Manage. 2019;44:1–12. https://doi.org/10.1016/j.ijinfomgt.2018.09.003 .

Siddiqa A, Hashem IAT, Yaqoob I, Marjani M, Shamshirband S, Gani A, Nasaruddin F. A survey of big data management: Taxonomy and state-of-the-art. J Netw Comput Appl. 2016;71:151–66. https://doi.org/10.1016/j.jnca.2016.04.008 .

Taleb, I., Dssouli, R., Serhani, M.A., 2015. Big Data Pre-processing: A Quality Framework, in: 2015 IEEE International Congress on Big Data (BigData Congress). Presented at the 2015 IEEE International Congress on Big Data (BigData Congress), pp. 191–198. https://doi.org/10.1109/BigDataCongress.2015.35

Cormode, G., Duffield, N., 2014. Sampling for Big Data: A Tutorial, in: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14. ACM, New York, NY, USA, pp. 1975–1975. https://doi.org/10.1145/2623330.2630811

Gadepally, V., Herr, T., Johnson, L., Milechin, L., Milosavljevic, M., Miller, B.A., 2015. Sampling operations on big data, in: 2015 49th Asilomar Conference on Signals, Systems and Computers. Presented at the 2015 49th Asilomar Conference on Signals, Systems and Computers, pp. 1515–1519. https://doi.org/10.1109/ACSSC.2015.7421398

Liang F, Kim J, Song Q. A bootstrap metropolis-hastings algorithm for bayesian analysis of big data. Technometrics. 2016. https://doi.org/10.1080/00401706.2016.1142905 .


Satyanarayana, A., 2014. Intelligent sampling for big data using bootstrap sampling and chebyshev inequality, in: 2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE). Presented at the 2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), IEEE, Toronto, ON, Canada, pp. 1–6. https://doi.org/10.1109/CCECE.2014.6901029

Kleiner, A., Talwalkar, A., Sarkar, P., Jordan, M., 2012. The big data bootstrap. arXiv preprint

Dai, W., Wardlaw, I., Cui, Y., Mehdi, K., Li, Y., Long, J., 2016. Data Profiling Technology of Data Governance Regarding Big Data: Review and Rethinking, in: Latifi, S. (Ed.), Information Technolog: New Generations. Springer International Publishing, Cham, pp. 439–450. https://doi.org/10.1007/978-3-319-32467-8_39

Loshin, D., 2010. Rapid Data Quality Assessment Using Data Profiling 15.

Naumann F. Data profiling revisited. ACM. SIGMOD Record. 2014;42:40–9.

Buneman, P., Davidson, S.B., 2010. Data provenance–the foundation of data quality.

Glavic, B., 2014. Big Data Provenance: Challenges and Implications for Benchmarking, in: Specifying Big Data Benchmarks. Springer, pp. 72–80.

Wang, J., Crawl, D., Purawat, S., Nguyen, M., Altintas, I., 2015. Big data provenance: Challenges, state of the art and opportunities, in: 2015 IEEE International Conference on Big Data (Big Data). Presented at the 2015 IEEE International Conference on Big Data (Big Data), pp. 2509–2516. https://doi.org/10.1109/BigData.2015.7364047

Hwang W-J, Wen K-W. Fast kNN classification algorithm based on partial distance search. Electron Lett. 1998;34:2062–3.

Taleb, I., Kassabi, H.T.E., Serhani, M.A., Dssouli, R., Bouhaddioui, C., 2016. Big Data Quality: A Quality Dimensions Evaluation, in: 2016 Intl IEEE Conferences on Ubiquitous Intelligence Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress (UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld). Presented at the 2016 Intl IEEE Conferences on Ubiquitous Intelligence Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress (UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld), pp. 759–765. https://doi.org/10.1109/UIC-ATC-ScalCom-CBDCom-IoP-SmartWorld.2016.0122

Taleb, I., Serhani, M.A., 2017. Big Data Pre-Processing: Closing the Data Quality Enforcement Loop, in: 2017 IEEE International Congress on Big Data (BigData Congress). Presented at the 2017 IEEE International Congress on Big Data (BigData Congress), pp. 498–501. https://doi.org/10.1109/BigDataCongress.2017.73

Deng, Z., Zhu, X., Cheng, D., Zong, M., Zhang, S., n.d. Efficient kNN classification algorithm for big data. Neurocomputing. https://doi.org/10.1016/j.neucom.2015.08.112

Firmani, D., Mecella, M., Scannapieco, M., Batini, C., 2015. On the Meaningfulness of “Big Data Quality” (Invited Paper), in: Data Science and Engineering. Springer Berlin Heidelberg, pp. 1–15. https://doi.org/10.1007/s41019-015-0004-7

Lee YW. Crafting rules: context-reflective data quality problem solving. J Manag Inf Syst. 2003;20:93–119.


Acknowledgements

Not applicable.

This work is supported by fund #12R005 from ZCHS at UAE University.

Author information

Authors and affiliations.

College of Technological Innovation, Zayed University, P.O. Box 144534, Abu Dhabi, United Arab Emirates

Ikbal Taleb

College of Information Technology, UAE University, P.O. Box 15551, Al Ain, United Arab Emirates

Mohamed Adel Serhani

Department of Statistics, College of Business and Economics, UAE University, P.O. Box 15551, Al Ain, United Arab Emirates

Chafik Bouhaddioui

Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, H4B 1R6, Canada

Rachida Dssouli


Contributions

IT conceived the main conceptual ideas related to the Big Data quality framework and the proof outline. He designed the framework and its main modules, and he also worked on the implementation and validation of some of the framework's components. MAS supervised the study and was in charge of direction and planning; he also contributed to a couple of sections, including the introduction, abstract, framework, implementation, and conclusion. CB contributed to data preparation, sampling, and profiling; he also reviewed and validated all formulations and statistical modeling included in this work. RD contributed to the review and discussion of the core contributions and their validation. All authors read and approved the final manuscript.

Authors’ information

Dr. Ikbal Taleb is currently an Assistant Professor, College of Technological Information, Zayed University, Abu Dhabi, U.A.E. He got his Ph.D. in information and systems engineering from Concordia University in 2019, and MSc. in Software Engineering from the University of Montreal, Canada in 2006. His research interests include data and Big data quality, quality profiling, quality assessment, cloud computing, web services, and mobile web services.

Prof. M. Adel Serhani is currently a Professor, and Assistant Dean for Research and Graduate Studies College of Information Technology, U.A.E University, Al Ain, U.A.E. He is also an Adjunct faculty in CIISE, Concordia University, Canada. He holds a Ph.D. in Computer Engineering from Concordia University in 2006, and MSc. in Software Engineering from University of Montreal, Canada in 2002. His research interests include: Cloud for data intensive e-health applications, and services; SLA enforcement in Cloud Data centers, and Big data value chain, Cloud federation and monitoring, Non-invasive Smart health monitoring; management of communities of Web services; and Web services applications and security. He has a large experience earned throughout his involvement and management of different R&D projects. He served on several organizing and Technical Program Committees and he was the program Co-Chair of International Conference in Web Services (ICWS’2020), Co-chair of the IEEE conference on Innovations in Information Technology (IIT´13), Chair of IEEE Workshop on Web service (IWCMC´13), Chair of IEEE workshop on Web, Mobile, and Cloud Services (IWCMC´12), and Co-chair of International Workshop on Wireless Sensor Networks and their Applications (NDT´12). He has published around 130 refereed publications including conferences, journals, a book, and book chapters.

Dr. Chafik Bouhaddioui is an Associate Professor of Statistics in the College of Business and Economics at UAE University. He got his Ph.D. from University of Montreal in Canada. He worked as lecturer at Concordia University for 4 years. He has a rich experience in applied statistics in finance in private and public sectors. He worked as assistant researcher in Finance Ministry in Canada. He worked as Senior Analyst in National Bank of Canada and developed statistical methods used in stock market forecasting. He joined in 2004 a team of researchers in finance group at CIRANO in Canada to develop statistical tools and modules in finance and risk analysis. He published several papers in well-known journals in multivariate time series analysis and their applications in economics and finance. His area of research is diversified and includes modeling and prediction in multivariate time series, causality and independence tests, biostatistics, and Big Data.

Prof. Rachida Dssouli is a full professor and Director of Concordia Institute for Information Systems Engineering, Faculty of Engineering and Computer Science, Concordia University. Dr. Dssouli received a Master (1978), Diplome d'études Approfondies (1979), Doctorat de 3eme Cycle in Networking (1981) from Université Paul Sabatier, Toulouse, France. She earned her PhD degree in Computer Science (1987) from Université de Montréal, Canada. Her research interests are in Communication Software Engineering a sub discipline of Software Engineering. Her contributions are in Testing based on Formal Methods, Requirements Engineering, Systems Engineering, Telecommunication Service Engineering and Quality of Service. She published more than 200 papers in journals and referred conferences in her area of research. She supervised/ co-supervised more than 50 graduate students among them 20 PhD students. Dr. Dssouli is the founding Director of Concordia Institute for Information and Systems Engineering (CIISE) June 2002. The Institute hosts now more than 550 graduate students and 20 faculty members, 4 master programs, and a PhD program.

Corresponding author

Correspondence to Mohamed Adel Serhani .

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Taleb, I., Serhani, M.A., Bouhaddioui, C. et al. Big data quality framework: a holistic approach to continuous quality management. J Big Data 8 , 76 (2021). https://doi.org/10.1186/s40537-021-00468-0

Download citation

Received : 06 February 2021

Accepted : 15 May 2021

Published : 29 May 2021

DOI : https://doi.org/10.1186/s40537-021-00468-0


  • Big data quality
  • Quality assessment
  • Quality metrics and scores
  • Pre-processing



Purdue’s online data science master’s addresses burgeoning demand for trained data scientists

The interdisciplinary degree is accessible for working professionals from both technical and nontechnical backgrounds


WEST LAFAYETTE, Ind. — Data scientists who can make sense of today’s epic floods of data to generate actionable insights and communicate them to a variety of audiences are in demand in almost any field, from retail business and industry to health care, government, education, and more.

The U.S. Bureau of Labor Statistics estimates that jobs for data scientists will grow 36% by 2031. Nationally, there were nearly 125,000 data scientist jobs added from 2013-2023. Yet many of those jobs — with many more openings coming — went unfilled for a lack of trained data scientists. The bottom line: Nearly every industry today requires data scientists, and the number of these positions is expected to grow.

Purdue University’s new 100% online Master of Science in data science degree addresses the need and the high demand for a trained data science workforce that can harness the power of data to drive innovation, efficiency and competitiveness. The interdisciplinary master’s program is designed for working professionals with a technical background but includes a pathway to entry for professionals from nontechnical fields.

“This data science master’s program is specifically designed for online delivery and optimal online learning, making it accessible to professionals around the world,” said Dimitrios Peroulis, Purdue senior vice president for partnerships and online. “The interdisciplinary curriculum is diverse, customizable to a student’s needs and tailored for practical application immediately.”

Purdue’s online master’s in data science features core courses covering foundations of data science, machine learning and data mining, big data technologies and tools, data analysis, and data visualization and communication.

Students do a capstone project pairing them with an industry mentor and a collaborative team to manage a data science project from inception to completion. That includes developing project timelines, allocating resources and adapting strategies based on the project’s evolution. The capstone, modeled after curriculum from The Data Mine , Purdue’s award-winning data science learning community, is an opportunity to apply knowledge acquired throughout the master’s program to solve complex, real-world problems.

The online master’s program also features the opportunity to earn industry-aligned certificates along the way to earning a master’s degree. Options include education, leadership, and policy; smart mobility and smart transportation; data science in finance; spatial data science; geospatial information science; managing information technology projects; IT business analysis; and applied statistics.

The program was developed by an interdisciplinary cohort of expert faculty from Purdue’s flagship campus, including the colleges of Agriculture, Education, Engineering, Health and Human Sciences, Liberal Arts, Pharmacy, Science, and Veterinary Medicine, along with the Mitch Daniels School of Business, the Purdue Polytechnic Institute, the Purdue Libraries, and the Office of the Vice Provost for Graduate Students and Postdoctoral Scholars.

“Purdue’s new online MS in data science program leverages the real-world experience of faculty working across several distinct disciplines,” said Timothy Keaton, assistant professor of practice in Purdue’s Department of Statistics, who was involved in developing the new degree. “This cooperation between experts in the application of data science in diverse fields provides a great opportunity to create engaging and meaningful coursework that incorporates many different potential areas of interest for our students.”

Students will develop expertise in programming languages, gaining the ability to design and implement data-driven solutions; learn to apply advanced technologies, including cloud computing and big data frameworks, to effectively handle and process large-scale datasets; gain a deep understanding of machine learning algorithms and models, applying them to real-world scenarios; and become proficient in collecting, cleaning, and analyzing diverse datasets.

The curriculum also is designed to teach learners data visualization and communication methods for creating compelling visual representations of complex data to effectively convey insights, along with the application of storytelling techniques to communicate findings clearly to both technical and nontechnical audiences. The program covers adherence to ethical standards in data science, privacy, transparency and fairness as well.

The program draws on Purdue’s expertise in myriad aspects of data science. Known for its emphasis on practical programs with proven value, Purdue has been rated among the Top 10 Most Innovative Schools for six years running by U.S. News & World Report and is the No. 8 public university in the U.S. according to the latest QS World University Rankings.

“The breadth and depth of topics that data science encompasses necessitate graduate programs that incorporate expertise from a variety of disciplines and then integrate this into a curriculum to meet the needs of its students,” said John Springer, a Purdue computer and information technology professor who was involved in developing the new degree. “Purdue’s unique approach to the development and delivery of its new online master’s program wholly fulfills these requirements by utilizing a highly interdisciplinary team of Purdue faculty backed by Purdue’s outstanding team of instructional designers.”

For more information about Purdue’s 100% online Master of Science in data science degree, visit the program website .

About Purdue University

Purdue University is a public research institution demonstrating excellence at scale. Ranked among top 10 public universities and with two colleges in the top four in the United States, Purdue discovers and disseminates knowledge with a quality and at a scale second to none. More than 105,000 students study at Purdue across modalities and locations, including nearly 50,000 in person on the West Lafayette campus. Committed to affordability and accessibility, Purdue’s main campus has frozen tuition 13 years in a row. See how Purdue never stops in the persistent pursuit of the next giant leap — including its first comprehensive urban campus in Indianapolis, the Mitch Daniels School of Business, Purdue Computes and the One Health initiative — at https://www.purdue.edu/president/strategic-initiatives .

Media contact: Brian Huchel, [email protected]


  • Open access
  • Published: 28 August 2024

Transforming simulation in healthcare to enhance interprofessional collaboration leveraging big data analytics and artificial intelligence

  • Salman Yousuf Guraya 1  

BMC Medical Education volume  24 , Article number:  941 ( 2024 ) Cite this article

10 Altmetric

Metrics details

Simulation in healthcare, empowered by big data analytics and artificial intelligence (AI), has the potential to drive transformative innovations towards enhanced interprofessional collaboration (IPC). This convergence of technologies revolutionizes medical education, offering healthcare professionals (HCPs) an immersive, iterative, and dynamic simulation platform for hands-on learning and deliberate practice. Big data analytics, integrated in modern simulators, creates realistic clinical scenarios that mimic real-world complexities. This optimization of skill acquisition and decision-making with personalized feedback fosters life-long learning. Beyond clinical training, simulation-based AI, virtual reality (VR), and augmented reality (AR) automated tools offer avenues for quality improvement, research and innovation, and team working. The integration of VR and AR further enhances the simulation experience by providing realistic environments for practicing high-risk procedures and for personalized learning. IPC, crucial for patient safety and quality care, finds a natural home in simulation-based education, which fosters teamwork, communication, and shared decision-making among diverse HCP teams. A thoughtful integration of simulation-based medical education into curricula requires overcoming barriers such as professional silos and stereotyping, and technology should be implemented in clinical training cautiously, without neglecting real patient-based medical education.

Peer Review reports

Simulation in healthcare, powered by big data analytics (BDA) and artificial intelligence (AI), stands at the forefront of transformative innovations, with the promise of facilitating interprofessional collaboration (IPC). This convergence of technologies and educational philosophies not only revolutionizes medical training but also enhances the quality of care and patient safety in an IPC climate, supporting efficient delivery of the healthcare system [1]. Simulation in healthcare offers a controlled, versatile, and safe environment for healthcare professionals (HCPs) from diverse disciplines to engage in hands-on learning with deliberate practice [2]. Learners are immersed in an iterative and interactive climate that nurtures opportunities for the acquisition of transferable psychomotor and cognition-based skills [3]. A simulated environment captures the essence of life-long learning, where learners can train through deliberate practice until they have acquired the targeted skills.

BDA, embedded in modern cutting-edge simulators, can utilize enormous volumes of healthcare data for clinical training and skills acquisition [ 4 ]. For instance, Bateman and Wood employed Amazon Web Services to accumulate a complete human genomic scaffold of 140 million individual base pairs by adopting an advanced hashing algorithm [ 5 ]. Later, a BDA platform successfully matched the clinical data of hospitalized children to their whole-genome sequences for the management of potentially incurable conditions [ 6 ]. From another perspective, by lending realism to clinical scenarios, BDA can be a valuable tool for reflecting the complexities of real-world medical practice. This data-driven approach diligently mimics the variability and inconsistency encountered in real clinical settings, preparing HCPs for diverse patient encounters and crisis management. AI, with its machine learning algorithms (MLAs) and natural language processing (NLP), further fortifies the impact of simulation by enabling adaptive learning experiences [ 7 ]. Moreover, AI-powered patient simulators with automated interfaces can demonstrate high-fidelity, realistic physiological responses such as pulse, blood pressure, breathing patterns, and facial expressions, allowing learners to practice decision-making in lifelike scenarios. By analyzing simulation data, institutions can identify trends, best practices, and areas for improvement, ultimately enhancing patient outcomes and advancing medical knowledge.

Applications of BDA harness electronic health records, medical imaging, genetic information, and patient demographics. By aggregating and analyzing these data, simulation platforms can create realistic scenarios that learners can use for clinical reasoning and critical decision-making. Additionally, MLAs and NLP have the ability to predict disease prognosis, treatment efficacy, and unwanted outcomes, thereby offering a reliable hub for interactive and immersive learning for HCPs [ 8 ]. MLAs and NLP also support adaptive learning by analyzing learner interactions and performance in real-time. This unique opportunity to acquire skills mastery with personalized feedback, whether from the simulator, a peer, or a facilitator, makes simulation a master-class educational and training tool for all HCPs. For instance, if a learner consistently makes errors in decision-making or a procedural skill, a smart simulator can tailor further exercises to provide targeted practice opportunities for that individual learner, as illustrated in the sketch below.
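To make this adaptive-practice idea concrete, the following minimal Python sketch shows how a simulator might rank skills by a learner's recent error rates and select the next practice scenarios. It is a hypothetical illustration rather than a published implementation; the skill names, threshold, and data structures are assumptions introduced for the example.

```python
# Hypothetical sketch of adaptive practice selection; not from the article.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Attempt:
    skill: str   # e.g. "sepsis triage", "IV insertion" (illustrative names)
    errors: int  # number of errors flagged by the simulator for this attempt

def recommend_practice(history: list[Attempt], threshold: float = 1.0, top_n: int = 2) -> list[str]:
    """Return the skills with the highest mean error rate for targeted practice."""
    totals, counts = Counter(), Counter()
    for attempt in history:
        totals[attempt.skill] += attempt.errors
        counts[attempt.skill] += 1
    mean_errors = {skill: totals[skill] / counts[skill] for skill in counts}
    weak = [s for s, m in sorted(mean_errors.items(), key=lambda kv: -kv[1]) if m >= threshold]
    return weak[:top_n]

if __name__ == "__main__":
    history = [
        Attempt("sepsis triage", 3), Attempt("sepsis triage", 2),
        Attempt("IV insertion", 0), Attempt("airway management", 1),
    ]
    # Ranks "sepsis triage" first, then "airway management"; "IV insertion" is skipped.
    print(recommend_practice(history))
```

In practice, a simulator would feed such a ranking back into its scenario generator, so the learner's next session emphasizes the skills where errors cluster.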

Clinical training stands at the crossroads of adopting AI, virtual reality (VR), and augmented reality (AR) technologies. Beyond training, simulation-driven medical education holds immense potential for quality improvement and research in healthcare [ 9 ]. VR and AR technologies offer immersive experiences that simulate clinical settings with unprecedented realism. VR headsets transport learners into a cyberspace where they engage with animations, digital images, and a host of other exercises in a virtual climate [ 10 ]. AR overlays digital information onto the physical world, allowing learners to visualize anatomical structures, medical procedures, or patient data in real-time. Moreover, VR and AR can be used to practice high-risk medical procedures until skill mastery is fully acquired; such an opportunity is rarely possible in real-world workplaces owing to threats to patient safety and the limited time available for learners' training [ 11 ]. At the same time, the mapping of learners' needs onto the curriculum is possible only in a simulated environment, where learners' expectations can be tailored to their learning styles [ 11 ]. AI, VR, and AR technologies in healthcare simulators essentially empower learners to develop clinical expertise, enhance patient care, and drive innovations in healthcare delivery.

An example of the integration of AI, NLP, MLAs, and other algorithms in simulation is the management of sepsis in a virtual patient by a team of HCPs from different healthcare disciplines. A patient presents to the emergency room with fever, confusion, and rapid breathing. The AI platform creates a detailed medical record for the patient, with past hospital visits, medications, allergies, and baseline health metrics. The AI simulates the patient's symptoms in real-time, with tachycardia, tachypnea, hypotension, and fever. The trainees interview the virtual patient, and the AI responds, using NLP, with coherent and contextually appropriate answers. The trainees order a set of tests, including blood cultures, a complete blood count, and lactate levels. The AI presents realistic results: blood cultures show a bacterial infection, along with leukocytosis and elevated lactate levels. Based on the diagnosis of sepsis, the trainees plan treatment, which typically includes oxygen, broad-spectrum antibiotics, and intravenous fluids. The AI then adjusts the patient's condition based on the trainees' actions, which may lead to improvement in clinical parameters; delayed treatment, however, could lead to worsening symptoms such as septic shock. Furthermore, the AI can introduce complications if initial treatments are ineffective or if the trainees commit errors. Thereupon, the AI provides real-time feedback on the trainees' decisions, highlighting missed signs, suggesting alternative diagnostic tests, or recommending adjustments to treatment plans. Lastly, the AI generates a summary report of performance with a breakdown of diagnostic accuracy, treatment efficacy, and adherence to clinical guidelines. MLAs analyze patterns in patient data to assist in diagnosis; in this context, MLAs such as decision trees and neural networks analyze vast datasets of patient records to create realistic virtual patients with diverse medical histories and clinical conditions. A simplified sketch of such a virtual-patient loop is given below.
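The following minimal Python sketch illustrates, in a deliberately simplified and hypothetical form, how a rule-based virtual sepsis patient could update its vitals in response to trainee actions and return feedback. The vital-sign values, action names, and rules are illustrative assumptions for the example, not a validated clinical model or the authors' implementation.

```python
# Hypothetical, rule-based virtual sepsis patient; values and rules are illustrative only.
from dataclasses import dataclass, field

@dataclass
class VirtualPatient:
    heart_rate: int = 122        # tachycardia
    systolic_bp: int = 86        # hypotension
    resp_rate: int = 28          # tachypnea
    temperature: float = 39.1    # fever, degrees Celsius
    treated: set = field(default_factory=set)

    def apply(self, action: str) -> None:
        """Update vitals in response to a trainee action."""
        self.treated.add(action)
        if action == "iv_fluids":
            self.systolic_bp += 10
            self.heart_rate -= 8
        elif action == "broad_spectrum_antibiotics":
            self.temperature -= 0.6
        elif action == "oxygen":
            self.resp_rate -= 4

    def step(self) -> str:
        """Deteriorate if the sepsis bundle is incomplete; return simple feedback."""
        bundle = {"iv_fluids", "broad_spectrum_antibiotics", "oxygen"}
        if not bundle.issubset(self.treated):
            self.systolic_bp -= 5
            self.heart_rate += 5
            missing = ", ".join(sorted(bundle - self.treated))
            return f"Patient worsening; consider: {missing}"
        return "Patient stabilizing"

patient = VirtualPatient()
patient.apply("oxygen")
print(patient.step())   # prompts for the remaining bundle items
patient.apply("iv_fluids")
patient.apply("broad_spectrum_antibiotics")
print(patient.step())   # "Patient stabilizing"
```

A production simulator would replace these hand-written rules with models trained on patient records and would log every action for the debrief report described above.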

There has been a proliferation of empirical research on the powerful role of IPC in medical education [ 12 , 13 ]. IPC fosters shared decision-making, role identification and negotiation, and team coherence, and it mitigates potential errors [ 14 ]. Through simulated scenarios, HCPs learn to navigate interdisciplinary challenges, appreciate each other's roles, and develop a shared approach to patient care. At the same time, simulation in healthcare faces challenges of cost, access, development, and ethical considerations. Nevertheless, the integration of simulation, BDA, VR, AR, and AI heralds a new era of IPC in healthcare, where learning, practice, and innovation converge to shape the future of medicine.

The overarching goal of all healthcare systems is patient safety, as reiterated by the World Health Organization (WHO) sustainable development goals [ 15 ]. The General Medical Council, the Irish Medical Council, CanMEDS, the Accreditation Council for Graduate Medical Education, and EmiratesMEDS are in agreement with the WHO, and in this context IPC can potentially enhance the quality of care and patient safety [ 16 ]. Though the role of IPC is widely accepted, there has been a lukewarm response from medical institutions regarding its integration into existing curricula. Professional silos, stereotyping, bureaucratic inertia, and resistant mindsets are some of the deterring factors [ 17 ]. In the era of simulation in healthcare, IPC can be efficiently embedded into this technology-powered educational tool for impactful collaborative teamwork. By harnessing the technological power of VR, AR, and AI, simulation platforms can leverage the inherent advantages of IPC in clinical training. Once skill acquisition is accomplished on the simulated platform, recreating it in the real world becomes a seamless transfer of skills.

To sum up, despite the exponential growth in the use of technology-driven simulation in healthcare, educators should be mindful of its careful integration into medical curricula. Clinical training on real patients cannot be replaced by any strategy or tool, regardless of its perceived efficiency or effectiveness. Bearing in mind the learning styles of our learners, with their preference for fluid reasoning over crystallized verbal comprehension, technology-driven simulation plays a vital role in medical education. A thoughtful integration of simulation, pitched at specific courses and modules and spiraled across the curriculum, will enhance the learning experience of medical and health sciences students and HCPs [ 18 ].

Data availability

No datasets were generated or analysed during the current study.

Choudhury A, Asan O. Role of artificial intelligence in patient safety outcomes: systematic literature review. JMIR Med Inf. 2020;8(7):e18599.


Higgins M, Madan CR, Patel R. Deliberate practice in simulation-based surgical skills training: a scoping review. J Surg Educ. 2021;78(4):1328–39.

Watts PI, McDermott DS, Alinier G, Charnetski M, Ludlow J, Horsley E, et al. Healthcare simulation standards of best practiceTM simulation design. Clin Simul Nurs. 2021;58:14–21.

Chrimes D, Moa B, Zamani H, Kuo M-H. Interactive healthcare big data analytics platform under simulated performance. In: 2016 IEEE 14th Intl Conf on Dependable, Autonomic and Secure Computing, 14th Intl Conf on Pervasive Intelligence and Computing, 2nd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech). IEEE; 2016.

Bateman A, Wood M. Cloud computing. Oxford University Press; 2009. p. 1475.

Twist GP, Gaedigk A, Miller NA, Farrow EG, Willig LK, Dinwiddie DL, et al. Constellation: a tool for rapid, automated phenotype assignment of a highly polymorphic pharmacogene, CYP2D6, from whole-genome sequences. NPJ Genomic Med. 2016;1(1):1–10.

Winkler-Schwartz A, Bissonnette V, Mirchi N, Ponnudurai N, Yilmaz R, Ledwos N, et al. Artificial intelligence in medical education: best practices using machine learning to assess surgical expertise in virtual reality simulation. J Surg Educ. 2019;76(6):1681–90.

Li WT, Ma J, Shende N, Castaneda G, Chakladar J, Tsai JC, et al. Using machine learning of clinical data to diagnose COVID-19: a systematic review and meta-analysis. BMC Med Inf Decis Mak. 2020;20:1–13.


Caffò AO, Tinella L, Lopez A, Spano G, Massaro Y, Lisi A, et al. The drives for driving simulation: a scientometric analysis and a selective review of reviews on simulated driving research. Front Psychol. 2020;11:917.

Hsieh M-C, Lee J-J. Preliminary study of VR and AR applications in medical and healthcare education. J Nurs Health Stud. 2018;3(1):1.

Forgione A, Guraya SY. The cutting-edge training modalities and educational platforms for accredited surgical training: a systematic review. J Res Med Sci. 2017;22(1):51.

Sulaiman N, Rishmawy Y, Hussein A, Saber-Ayad M, Alzubaidi H, Al Kawas S, et al. A mixed methods approach to determine the climate of interprofessional education among medical and health sciences students. BMC Med Educ. 2021;21:1–13.

Guraya SY, David LR, Hashir S, Mousa NA, Al Bayatti SW, Hasswan A, et al. The impact of an online intervention on the medical, dental and health sciences students about interprofessional education; a quasi-experimental study. BMC Med Educ. 2021;21:1–11.

Wei H, Corbett RW, Ray J, Wei TL. A culture of caring: the essence of healthcare interprofessional collaboration. J Interprof Care. 2020;34(3):324–31.

World Health Organization. Global patient safety action plan 2021–2030: towards eliminating avoidable harm in health care. World Health Organization; 2021.

Guraya SS, Umair Akhtar M, Sulaiman N, David LR, Jirjees FJ, Awad M, et al. Embedding patient safety in a scaffold of interprofessional education; a qualitative study with thematic analysis. BMC Med Educ. 2023;23(1):968.

Supper I, Catala O, Lustman M, Chemla C, Bourgueil Y, Letrilliart L. Interprofessional collaboration in primary health care: a review of facilitators and barriers perceived by involved actors. J Public Health. 2015;37(4):716–27.

Guraya SS, Guraya SY, Al-Qahtani MF. Developing a framework of simulation-based medical education curriculum for effective learning. Med Educ. 2020;24(4):323–31.


Acknowledgements

Not applicable.

Author information

Authors and Affiliations

Vice Dean, College of Medicine, University of Sharjah, Sharjah, United Arab Emirates

Salman Yousuf Guraya


Contributions

This is a sole-author manuscript. Salman Guraya conceived, prepared, reviewed, revised, and finalized this editorial article.

Corresponding author

Correspondence to Salman Yousuf Guraya.

Ethics declarations

Ethics approval and consent to participate, and consent for publication

Not applicable as this is an editorial article.

Competing interests

The corresponding author is a senior editorial board member of BMC Medical Education.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

Reprints and permissions

About this article

Cite this article.

Guraya, S.Y. Transforming simulation in healthcare to enhance interprofessional collaboration leveraging big data analytics and artificial intelligence. BMC Med Educ 24 , 941 (2024). https://doi.org/10.1186/s12909-024-05916-y

Download citation

Received : 06 May 2024

Accepted : 14 August 2024

Published : 28 August 2024

DOI : https://doi.org/10.1186/s12909-024-05916-y


Keywords

  • Big data analytics
  • Augmented reality
  • Virtual reality
  • Artificial intelligence
  • Medical education



A behind-the-scenes blog about research methods at Pew Research Center. For our latest findings, visit pewresearch.org.

  • Data Science

How Pew Research Center is – and is not – using large language models in our work


At Pew Research Center, we’re watching with great interest as new generations of large language models such as GPT, Claude and Gemini develop. These models, also known as LLMs, are machine learning tools that are trained on massive digital datasets to predict and generate humanlike language. They’re sometimes described as a subset of generative artificial intelligence.

As public-facing social scientists committed to innovation, we’re intrigued by what this fast-moving technology might add to our toolkit. As researchers, we’re committed to explaining how the public is reacting to these advances. And as information providers whose fundamental values include accuracy and methodological rigor, we’re moving with great deliberation so as not to affect the quality of our work.

In this post, we’ll share our current guidelines for the internal use of LLMs. We hope to start a dialogue with our peers and broader audiences about the best ways to use this technology as it continues to develop.

LLMs aren’t new to the research community. The computational social scientists on our Data Labs team have regularly used these tools in specialized, narrowly defined research tasks for years.

What is new is that developments in scale, computational capacity and model training have led to a massive leap forward in these models’ general capabilities.

Modern LLMs can perform a much wider variety of tasks, particularly when it comes to interpreting and mimicking human communication. But even though LLMs are increasingly capable (and increasingly integrated into common software), they are just pattern recognition systems. And in their 2024 iteration, at least, they are not guaranteed to provide accurate, factual information. When they fail to produce accurate information, it can be hard to explain what went wrong because of the complexity of their internal workings and the sheer amount of data on which they were trained.

Our commitments when using LLMs

Given the above, the Center’s approach in 2024 is a version of “proceed with caution.” Here are our commitments to ourselves and to our audiences:

Our work is people-centered

Real people, not machines, answer our surveys. Collecting people’s opinions in the United States and abroad is our most important task. We do not use LLMs to create or model “synthetic” public opinion. Our survey results are based on the views reported to us by real people.

Photos on our site are of and by real people. We are not using AI to create images. Human artists create the artwork.

Humans oversee, and are responsible for, every aspect of our work. From survey questionnaires to published research reports, our work begins and ends with human experts. We do not use LLMs to decide on research topics or questionnaire items. We do not use LLMs to identify the storylines and key findings of our reports and blog posts. We believe that trained and experienced humans must guide the process of going from a raw dataset to a report that helps our audience make sense of the data.

Accuracy and rigor remain paramount

In everything we do, we prioritize accuracy and rigor, and our explorations of AI are no different. To the extent that modern technologies can help facilitate our mission to generate a trustworthy foundation of facts, we’re interested in adopting them. But only if they allow us to maintain our existing standards for quality research.

We’ll experiment as a path to innovation

Current areas of experimentation

In the production of our website. We see real potential in using this technology to help write the code needed to produce our website – a detailed, structured and often repetitive process. While we’re not currently using AI to improve users’ search and navigation experiences on our website, we see that as an area worth exploring in the near term.

In our research process. Coding assistance is also potentially useful to our researchers. For example, we may use an LLM code assistant to format or help write the code needed to analyze survey datasets. But it needs guardrails. At the Center, researchers who are fluent in a coding language can access an LLM coding assistant only if they have a full, human-run code check process.

We’ll also continue experimenting with using LLMs to analyze textual data, such as coding open-ended survey responses into categories or scraping websites for key data. This work has been and will always be overseen by Center staff. We will continue to be transparent and acknowledge in our report methodologies whenever we have used these tools.
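To make that kind of human-supervised task concrete, here is a minimal, hypothetical Python sketch of using an LLM to assign open-ended survey responses to categories. It is not Pew Research Center's actual pipeline; the model name, category list, and prompt are illustrative assumptions, and any output outside the codebook is routed back to a human coder.

```python
# Hypothetical sketch only; not Pew Research Center's pipeline.
# Assumes the OpenAI Python client (openai >= 1.0); model and categories are illustrative.
from openai import OpenAI

CATEGORIES = ["economy", "health care", "immigration", "other"]  # assumed codebook

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def code_response(open_ended_text: str) -> str:
    """Ask an LLM to assign exactly one codebook category to a survey response."""
    prompt = (
        "Assign exactly one of these categories to the survey response below: "
        f"{', '.join(CATEGORIES)}.\n\nResponse: {open_ended_text}\n\nCategory:"
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    label = completion.choices[0].message.content.strip().lower()
    # Anything outside the codebook is flagged for human review rather than trusted.
    return label if label in CATEGORIES else "needs human review"

print(code_response("Prices at the grocery store keep going up and wages don't."))
```

In a workflow like the one described above, a human analyst would still spot-check the assigned labels and document the use of the tool in the methodology section.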

In the final stages of our editorial functions. Many widely available tools already use this technology to clean up grammar, punctuation, etc. And we are using these tools, too, though a human verifies our final copy. In our judgment, this level of use does not require external labeling.

Possible future areas of experimentation

We’re watching developments in the publishing industry carefully. Some interesting uses of AI include:

Creating derivative products. We see potential for leveraging LLMs to quickly generate derivative products that would create new access points to our content for a wider array of consumers. This could include drafting social media posts in a variety of styles for a variety of platforms. Or it could mean creating a first-draft, thematic summarization for our topic index landing pages. As of this writing, however, we are not ready to cross this bridge. Our current internal guidance is that any external-facing products need to be human-authored, not just overseen.

Summarizing research in search results. Currently, the search function on the Pew Research Center website delivers a list of links. We hope at some future point to incorporate a “smart search” overlay that would deliver a more pointed summary of our data to interested users. We’re following developments in the accuracy of model results so we can experiment further at the right moment.

We’ll be transparent

For currently approved uses:

  • If we use an LLM in any aspect of our research process, we will note that in the published methodology section.
  • Using an LLM to make minor grammatical, spelling or reading grade level changes is not considered a meaningful use and will not be cited.
  • As made clear above, our developers are already using human-supervised AI to write the code that creates our website.

We’d love to hear from you about your thoughts, hopes and concerns on this topic. We already know it’s one we’ll be revisiting this year and beyond.


Yale University Press

Map by Emanuel Bowen on Wikimedia

The Imperial Origins of Big Data

August 28, 2024 | East Asian Studies, Essays, European History, History, Technology

Asheesh Kapur Siddique—

We live in a moment of massive transformation in the nature of information. In 2020, according to one report, users of the Internet created 64.2 zettabytes of data, a quantity greater than the “number of detectable stars in the cosmos,” a colossal increase whose origins can be traced to the emergence of the World Wide Web in 1993. 1 Facilitated by technologies like satellites, smartphones, and artificial intelligence, the scale and speed of data creation seem likely only to balloon over the rest of our lifetimes, and with them the problem of how to govern ourselves in relation to the inequalities and opportunities that the explosion of data creates.

But while much about our era of big data is indeed revolutionary, the political questions that it raises—How should information be used? Who should control it? And how should it be preserved?—are ones with which societies have long grappled. These questions attained a particular importance in Europe from the eleventh century due to a technological change no less significant than the ones we are witnessing today: the introduction of paper into Europe. Initially invented in China, paper travelled to Europe via the conduit of Islam around the eleventh century after the Moors conquered Spain. Over the twelfth, thirteenth, and fourteenth centuries, paper emerged as the fundamental substrate which politicians, merchants, and scholars relied on to record and circulate information in governance, commerce, and learning. At the same time, governing institutions sought to preserve and control the spread of written information through the creation of archives: repositories where they collected, organized, and stored documents.

The expansion of European polities overseas from the late fifteenth century onward saw governments massively scale up their use of paper—and confront the challenge of controlling its dissemination across thousands of miles of ocean and land. These pressures were felt particularly acutely in what eventually became the largest empire in world history, the British empire. As people from the British isles from the early seventeenth century fought, traded, and settled their way to power in the Atlantic world and South Asia, administrators faced the problem of how to govern both their emigrating subjects and the non-British peoples with whom they interacted. This meant collecting information about their behavior through the technology of paper. Just as we struggle to organize, search, and control our email boxes, text messages, and app notifications, so too did these early moderns confront the attendant challenges of developing practices of collection and storage to manage the resulting information overload. And despite the best efforts of states and companies to control information, it constantly escaped their grasp, falling into the hands of their opponents and rivals who deployed it to challenge and contest ruling powers.

The history of the early modern information state offers no simple or straightforward answers to the questions that data raises for us today. But it does remind us of a crucial truth, all too readily obscured by the deluge of popular narratives glorifying technological innovation: that questions of data are inherently questions about politics—about who gets to collect, control, and use information, and the ends to which information should be put. We should resist any effort to insulate data governance from democratic processes—and having an informed perspective on the politics of data requires that we attend not just to its present, but also to its past.

As I have written elsewhere, “According to what rules should this information be gathered? How should it be used? Who should have access to it? These questions continue to preoccupy our world, much as they did the strange and remote world of the early modern British Empire. The past is at once more distant and more proximate than we may think.” 2

1. “Breaking Down the Numbers: How Much Data Does the World Create Daily in 2024?,” Edge Delta , March 11, 2024, https://edgedelta.com/company/blog/how-much-data-is-created-per-day

2. Asheesh Kapur Siddique, The Archive of Empire: Knowledge, Conquest, and the Making of the Early Modern British World, Yale University Press, 2024, p. 180.

Asheesh Kapur Siddique  is assistant professor in the Department of History at the University of Massachusetts Amherst. He is a historian of early America, early modern Europe, and the British Empire. He lives in Northampton, MA.



