Unit 4d: Laboratory Techniques and their Application
Issue date: 10/06/2022
Deadline: 24/06/2022
Role of LIMS in the Laboratory
A laboratory information management system (LIMS) is a class of software that supports the storage and management of information generated in a modern laboratory. Functionality varies significantly between LIMS implementations: the LIMS is a continually evolving concept, with new features added in line with technological progress and changing laboratory demands. Despite this variation, a set of core functions appears in most, if not all, LIMS.
A traditional core function of a LIMS is sample management within the laboratory (Skobelev et al., 2011). Formerly a time-consuming manual process relying on physical tracking on paper (Gibbon et al., 1996), it is now streamlined by LIMS software. We will now discuss how sample management may be conducted with the aid of a modern LIMS. Upon registration, a sample is assigned a unique identifier and a barcode, which can be affixed to the sample's container. The sample's location, e.g. a particular freezer or rack, may be tracked alongside other parameters associated with the sample (e.g. clinical and phenotypic information). Recording a chain of custody, i.e. the person or group responsible for handling the sample at each stage, is another common function. This rigorous management ensures that samples and the associated LIMS data are fully tracked and that an audit trail is maintained. The example above is a general, theoretical illustration of how LIMS software may handle sample management; the details vary greatly between individual laboratories and between implementations of LIMS. Other commonly found functions include, but are not limited to: instrument integration (e.g. automatic recording of instrument data); electronic data exchange (e.g. exchanging orders and invoices); and compliance with regulatory standards.
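The sample-management workflow described above can be sketched in a few lines of code. This is an illustrative sketch only, not a real LIMS: the class, field names, and example values are all assumptions chosen to mirror the steps discussed (registration with a unique identifier, location tracking, and a timestamped chain-of-custody log).

```python
import uuid
from datetime import datetime, timezone

class Sample:
    """Minimal illustrative record of a registered laboratory sample."""

    def __init__(self, description, location):
        # A unique identifier is generated at registration; a real LIMS
        # would also render this as a printable barcode label.
        self.sample_id = str(uuid.uuid4())
        self.description = description
        self.location = location   # e.g. "Freezer 2, Rack B"
        self.custody_log = []      # chain of custody / audit trail
        self._log("registered", location)

    def _log(self, action, detail):
        # Every action is timestamped so an audit trail is maintained.
        self.custody_log.append((datetime.now(timezone.utc), action, detail))

    def move_to(self, new_location, handler):
        self.location = new_location
        self._log("moved", f"{new_location} by {handler}")

sample = Sample("plasma aliquot", "Freezer 2, Rack B")
sample.move_to("Bench 4", "A. Technician")
print(sample.location)          # Bench 4
print(len(sample.custody_log))  # 2 entries: registered, moved
```

Because every state change passes through the logging helper, the custody log doubles as the audit trail: the full history of the sample can be reconstructed from it.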
Data Mining and Big Data
Although we have already explored the role of LIMS in sample management at length, modern LIMS have expanded in scope, with many now including methods for data analysis and data mining. In this section, we discuss how data mining is used to extract useful scientific information from large data sets. Data mining can be defined as the process of extracting patterns from large data sets using techniques from machine learning, statistics, and database systems. Data mining is not an independent process performed on a data set in isolation; rather, it is a single step in a larger process known as knowledge discovery in databases (KDD).
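The KDD process can be sketched as a pipeline in which data mining is only one stage. The following is a minimal sketch under stated assumptions: the stage functions and the toy assay records are invented for illustration and follow the conventional selection, pre-processing, transformation, mining, and interpretation breakdown.

```python
def selection(records):
    # Select only the fields relevant to the analysis.
    return [(r["assay"], r["value"]) for r in records]

def preprocessing(pairs):
    # "Clean" the data: drop entries with missing values.
    return [(a, v) for a, v in pairs if v is not None]

def transformation(pairs):
    # Put the cleaned data into a form the mining step can use.
    return {a: v for a, v in pairs}

def data_mining(table):
    # A deliberately trivial "pattern": the assay with the largest value.
    return max(table, key=table.get)

def interpretation(pattern):
    # Turn the mined pattern into a human-readable statement.
    return f"Highest signal observed in assay {pattern}"

records = [
    {"assay": "A", "value": 0.8, "operator": "JS"},
    {"assay": "B", "value": None, "operator": "JS"},
    {"assay": "C", "value": 1.4, "operator": "KL"},
]

result = interpretation(
    data_mining(transformation(preprocessing(selection(records))))
)
print(result)  # Highest signal observed in assay C
```

Note that the mining step only works because the earlier stages have already removed the missing value and reshaped the data; this is the sense in which data mining is a single step in the larger KDD process rather than a standalone operation.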
Knowledge discovery refers to the ability to derive patterns, or other abstractions, from large data sets. It commonly involves five distinct stages: selection, pre-processing, transformation, data mining, and interpretation (Han et al., 2012). An in-depth explanation of data pre-processing or of particular data mining techniques is beyond the scope of this report, so we discuss these topics only in general terms. In essence, the core role of pre-processing is to remove, or "clean", imperfections in data sets, as they commonly have missing values, contain irrelevant information,