Research Methodology (HMPYC 8)
Section C, Chapter 11 – Quantitative Data Collection Methods
➢ 1. Introduction:
• Once you have established the research design to use in your study, it is time to identify the most suitable data collection
method that fits the particular design and the circumstances of the research project.
• A direct relationship exists between the chosen research design and the data collection method.
- The data collection method represents the 'how' of observation.
• Since quantitative data collection methods often employ measuring instruments, the term 'measuring instrument' refers
to previously developed instruments such as structured observation schedules, structured interview schedules,
questionnaires, checklists, indexes and scales.
➢ 2. Concepts of Measurement:
• Measurement implies observation of complex social phenomena by means of a numerical schema to evaluate
statements or items that reflect components of the phenomenon being studied.
• Number values are allocated using a Likert scale, enabling us to observe quantities of the phenomenon.
- An indicator is an observation that is assumed to reflect an attribute or property of a phenomenon.
• Whether you are then able to judge if a participant fits the characteristics of depression depends on how the
participant responded to each of the indicators. Scaling ensures that numbers are assigned in a consistent manner, so that
measurement becomes a systematic means of creating objective scientific knowledge that enhances the subject's
knowledge base with empirical evidence.
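• A minimal sketch of this scoring logic is given below (not from the chapter): it assigns hypothetical Likert values (1 to 5) to four invented depression indicators, sums them into a composite score, and applies an assumed cut-off to show how the pattern of responses supports a judgement about the participant.

# Illustrative sketch only: the indicator wording, the 1-5 Likert schema and
# the cut-off value are assumptions made for this example, not chapter content.

# Hypothetical responses to four depression indicators,
# each rated from 1 (strongly disagree) to 5 (strongly agree).
responses = {
    "I often feel sad or down": 4,
    "I have lost interest in things I used to enjoy": 5,
    "I struggle to sleep at night": 3,
    "I find it hard to concentrate": 4,
}

# Scaling: every item uses the same numerical schema, so values are assigned
# consistently and can be combined into a single quantity.
composite_score = sum(responses.values())
maximum_score = 5 * len(responses)
print(f"Composite score: {composite_score} out of {maximum_score}")

# Hypothetical cut-off, used only to show how the indicators together
# inform a judgement about whether the participant fits the characteristics.
CUT_OFF = 14
fits = composite_score >= CUT_OFF
print("Fits the measured characteristics:", fits)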
• To obtain valid and reliable data you must ensure, before implementing the study, that the measurement procedures and
the instruments used have acceptable levels of reliability and validity.
• Validity and reliability are two of the most important concepts in measurement.
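• The notes do not prescribe a particular statistic, but internal consistency, one common way of expressing an instrument's reliability, is often summarised with Cronbach's alpha. The sketch below computes it for an invented matrix of Likert responses; the data are assumptions for illustration only.

# Illustrative sketch only: Cronbach's alpha as one common internal-consistency
# estimate of reliability. The response matrix below is invented.
from statistics import pvariance

# Rows = participants, columns = items (e.g. five Likert-type statements).
scores = [
    [4, 5, 4, 4, 5],
    [2, 3, 2, 3, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [4, 4, 5, 4, 4],
]

k = len(scores[0])                                      # number of items
item_variances = [pvariance(item) for item in zip(*scores)]
total_variance = pvariance([sum(row) for row in scores])

alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")  # values closer to 1 suggest higher internal consistency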
➢ 3. Validity and Reliability of Measuring Instruments:
➢3.1. Validity:
• According to Salkind, 'validity refers to the property of an assessment tool that indicates that the tool does what it says
it does'.
• In more technical terms, DeVellis describes validity as the extent to which a variable represents the underlying
construct of measurement or phenomenon, as the cause of item covariation, and thereby stresses the fact that validity
is all about whether the test or instrument actually measures what you have set out to measure.
• The definition of validity is twofold:
(i) the instrument actually measures the phenomenon in question; and
(ii) the concept is measured accurately.
- It is possible to have the first without the second, but not the other way around.
• Validity refers broadly to the degree to which an instrument is doing what it should do, that is, the extent to which it meets its purposes.
- An instrument may have several purposes that vary in number, kind and scope.
• One of the most common and useful classification schemes for categorising the types of validity underlying
measurement distinguishes content, face, criterion and construct validity.
- Content and face validity may be established prior to data collection, while criterion and construct validity are
established once the instrument has been used to collect data (see the correlation sketch after the list below).
• Different types of validity include:
(i) Content Validity
(ii) Face Validity
(iii) Criterion (or criterion-related) Validity
(iv) Construct Validity
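• As noted above, criterion (or criterion-related) validity is only examined once data have been collected, typically by correlating instrument scores with an external criterion. The sketch below uses Pearson's r from the Python standard library (3.10+) on invented scores; the instrument, criterion and values are all assumptions.

# Illustrative sketch only: checking criterion-related validity by correlating
# scores from a new instrument with an established criterion measure.
# All values are invented; statistics.correlation requires Python 3.10+.
from statistics import correlation

new_instrument_scores = [12, 18, 9, 20, 15, 11, 17]   # hypothetical questionnaire totals
criterion_scores      = [14, 19, 10, 22, 14, 12, 18]  # hypothetical established-measure scores

r = correlation(new_instrument_scores, criterion_scores)
print(f"Pearson's r with the criterion: {r:.2f}")
# A strong positive correlation would support the instrument's criterion validity.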
➢A. Content Validity:
• This is concerned with the representativeness or sampling adequacy of the content of an instrument.
• According to DeVellis, this type of validity has to do with the degree to which a range of items covers the range of
meanings included in the definition of the domain.
- He adds that the scope of items needs to sufficiently represent the concept of measurement, and should therefore neither
be too narrow in its representation nor too wide by including irrelevant items.
• A valid measure would provide an adequate or representative sample of all content or elements or instances of the
phenomenon being measured.