Article 1: The Art of Reading a Journal Article
Why this guide? Research output is exploding; clinicians/students need a systematic
way to select, read, and apply papers (esp. in fast-changing areas like COVID-19). Not all
published work is clinically relevant, so quality appraisal matters.
Example stats: medical literature volume rose sharply over decades; to stay fully
updated a generalist would need to read ~17 articles/day.
Core rule & overall approach
Do not read linearly from start to finish. First scan Title → Abstract → Conclusions to
judge relevance; only then read the whole paper. A ten-step logical method is advised.
Standard article structure & what to look for
- Title: should be specific/descriptive enough to signal study type/content.
- Abstract: often structured (background/methods/results/conclusions); use it to
answer what/why/how/results before committing to the full text.
- Introduction: gives rationale, prior work, gap, aims/hypothesis.
- Methods: participants, sampling, inclusion/exclusion, variables,
procedures/equipment; enough detail to replicate; check appropriateness.
- Results: data in tables/figures; no interpretation here. Check reliability/validity,
attrition accounting, and whether the right statistics were used (examples listed
in Table 2).
- Discussion: interprets findings, compares with prior studies, notes
strengths/limitations; no new data should appear.
- Conclusion: re-read at the end to confirm understanding and clinical meaning;
apply the five Cs:
o Category (type of paper)
o Context (relation to other studies)
o Correctness (valid assumptions)
o Contributions (new knowledge added)
o Clarity (quality of writing)
Critical reading mindset
Published papers are not infallible; question assumptions, methods, biases, and
generalizability. Journal clubs/teaching reinforce deep understanding (“to teach is to
learn twice”).
Article 2: Improving the DSM-5 approach to cognitive impairment: Developmental
prosopagnosia reveals the need for tailored diagnoses
The DSM-5 is the main guideline for diagnosing mental disorders.
Impairments are assessed across 6 domains:
1. Perceptual-motor function
2. Language
3. Memory/learning
4. Social cognition
5. Attention
6. Executive function
A diagnosis requires scoring more than 1 SD below average on 2 tasks.
This approach is helpful because it provides standardized guidance, but it is also
criticized:
- Risk of false positives: about 16% of healthy people would appear ‘impaired’ on
any single test (see the sketch after this list).
- Risk of false negatives: some patients with real cognitive issues may test within
the ‘normal’ range and be missed.
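A quick way to see where the ~16% figure comes from (a minimal sketch in Python; it assumes healthy test scores follow a normal distribution, which real norms only approximate):

    # Chance that a healthy person scores more than 1 SD below the mean
    # on a single test, assuming standard-normal scores.
    from math import erf, sqrt

    def normal_cdf(z):
        # Cumulative probability of the standard normal distribution at z.
        return 0.5 * (1 + erf(z / sqrt(2)))

    print(normal_cdf(-1.0))  # ~0.159, i.e. roughly 16% of healthy people

Under the same assumption, falling below -1 SD on two independent tests would happen for only about 0.159^2 ≈ 2.5% of healthy people, so the two-test rule does limit false positives, at the cost of the false negatives discussed below.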
Missed diagnoses are a major issue in conditions like Long COVID and dementia, where
patients may look similar to healthy controls on cognitive tests.
Clinicians must recognize that the DSM-5 criteria are not infallible and can misclassify
patients in both directions.
Developmental Prosopagnosia (DP)
Developmental Prosopagnosia (DP): lifelong difficulty in recognizing faces.
Has serious social and emotional consequences (e.g., anxiety, low confidence,
relationship problems).
Likely has a genetic component and is linked to various neural differences.
When using cut-offs like -2 SD, up to 85% of DP cases go undiagnosed. Even with the
more lenient DSM-5 criteria, many diagnoses would be missed.
Reasons for missed diagnosis
Subjective complaints vs. cut-offs:
- Many DP cases describe very clear, real-life problems recognizing familiar
people, yet still ‘fail’ to meet DSM-5 cut-offs.
- This shows the risk of dismissing self-reports simply because patients score
above arbitrary thresholds.
Over-reliance on certain tests (e.g., CFMT):
- The Cambridge Face Memory Test (CFMT) is widely used but misses 25% of DP
cases at -2 SD and 12.5% at -1 SD.
- Developers warned against using it as the sole diagnostic tool, but many
researchers/clinicians still do.
- This means some patients are wrongly told they do not have DP.
Ecological validity issues:
- Tests often fail to capture real-world problems.
- Example: Famous Faces Test (FFT) uses celebrity photos, but familiarity with
celebrities varies by culture, age and interest.
- Around 15-35% of self-identified DP cases score above the -1 SD cutoff on the FFT,
even though they struggle daily with familiar people.
- Possible reasons: neurotypicals may also perform worse with static photos
(shifting the comparison), or DP brains may treat celebrities differently from
acquaintances.
Test design limitations:
- Using single, static images ignores the importance of movement and context in
real-life recognition.
- Computer-based tasks may underestimate the true difficulties DP patients face.
Test-retest reliability problems:
- A person can receive a diagnosis one day but not the next.
- Example: in one study, 29% of DP cases changed diagnostic status when
retested, simply crossing the -1 SD cutoff in different directions (see the sketch
below).
- Some patients may also self-test online with the CFMT before clinical
assessment, which can further distort results.
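A minimal simulation of how ordinary measurement noise around a fixed cutoff can flip diagnostic status between sessions (all numbers here are illustrative assumptions, not values from the study):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    # Hypothetical DP-like group whose true ability sits near the cutoff.
    true_ability = rng.normal(-1.2, 0.5, n)
    noise_sd = 0.6  # assumed session-to-session measurement error
    session1 = true_ability + rng.normal(0, noise_sd, n)
    session2 = true_ability + rng.normal(0, noise_sd, n)

    cutoff = -1.0
    flipped = np.mean((session1 < cutoff) != (session2 < cutoff))
    print(f"Changed diagnostic status on retest: {flipped:.0%}")

With plausible noise levels the flip rate easily reaches the double-digit percentages reported above, even though nobody's underlying ability changed.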
Problems with two diagnostic tests
Two-test rule increases missed diagnoses:
- DSM-5 requires scoring below -1 SD on 2 tests; this worsens the problem since all
tests have validity/reliability flaws.
- Diagnostic ‘power’ is limited by the weaker test, meaning many true cases are
excluded.
Low statistical power of common tests:
- Requiring both tests reduces overall power further.
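A minimal sketch of why the two-test rule is limited by the weaker test (the sensitivities below are hypothetical, not figures from the article, and independence between tests is assumed):

    # Fraction of true DP cases each test flags on its own (hypothetical values).
    sens_test_a = 0.80
    sens_test_b = 0.70

    # If the tests were independent, requiring both to be failed catches only the product.
    joint = sens_test_a * sens_test_b
    print(f"Test A alone: {sens_test_a:.0%}, test B alone: {sens_test_b:.0%}, "
          f"both required: {joint:.0%}")

The combined rule can never be more sensitive than the weaker test, and under independence it is strictly less sensitive than either test on its own.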
Excluding objectively severe cases:
- Some patients score extremely low on one test but perform better on another.
- DSM-5 rules mean they are excluded, even though their scores show highly
atypical functioning.
If one test is nearly perfect, the second adds harm:
- A highly valid test (e.g., familiar face recognition) could diagnose nearly all DP
cases.
- But DSM-5 still requires a second, less sensitive test, which forces
misclassification.
Unclear guidance on which tests to use:
- DSM-5 gives only vague categories, leaving clinicians to assume different tests
are equally valid.
- In DP, familiar face recognition is most relevant, while perception or unfamiliar
face tests are less sensitive.
Correlation bias between tests:
- The relationship between 2 tests (e.g., strongly correlated vs. uncorrelated) biases
which patients get diagnosed.
- This can mean severely impaired patients are excluded simply because their
scores don’t align across tests.
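A minimal simulation of how the correlation between the two tests changes how many impaired cases meet the two-test rule (the group mean, cutoff and correlations below are illustrative assumptions, not values from the article):

    import numpy as np

    def two_test_rate(rho, n=100_000, cutoff=-1.0, seed=1):
        # Fraction of a hypothetical impaired group (both tests centred at -1.5 SD)
        # that falls below the cutoff on both tests, for a given test correlation.
        rng = np.random.default_rng(seed)
        cov = [[1.0, rho], [rho, 1.0]]
        scores = rng.multivariate_normal([-1.5, -1.5], cov, size=n)
        return np.mean((scores[:, 0] < cutoff) & (scores[:, 1] < cutoff))

    for rho in (0.0, 0.5, 0.9):
        print(f"test correlation {rho:.1f}: "
              f"{two_test_rate(rho):.0%} of impaired cases diagnosed")

The weaker the correlation, the more genuinely impaired people show a discrepant pair of scores and are excluded, which is exactly the bias described above.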
Consequences of missed diagnoses
Patient impact:
- Patients may doubt their own experiences or sanity when told nothing is wrong.
- Without a diagnosis, they cannot access insurance, treatment or workplace
accommodations.
- This is especially harmful when treatments work best in early stages.
- Families are also negatively affected by lack of recognition and support.
Scientific impact:
- Excluding milder cases skews prevalence estimates and effect sizes, making
disorders look rarer or more severe than they are.
- Cognitive models may be distorted: missed cases can flip findings from
‘overlapping functions’ to ‘separate functions’, undermining theory.
- Neuroimaging results may appear abnormal only in the most extreme cases,
leading to false conclusions about which brain regions are critical.
- Treatment trials become biased:
o Severe cases may appear to improve due to regression to the mean, not
real treatment effects.
o Mild cases (who might benefit most) are excluded entirely.
Overall: missed diagnoses harm both patients and science, reducing trust in clinical and
research findings.
Validating a symptom-based approach
Proposed alternative: use symptom questionnaires that capture lived experiences and
validate them against objective data.
DSM-5 vs. symptom-based approach results
- DSM-5 diagnosed only 62% of self-identified DP cases.
- Symptom-based approach (PI20 questionnaire) identified 100% as atypical.
Excluded cases reported slightly fewer problems than diagnosed ones but still scored
very abnormally.
In a replication in a large dataset, 30% of DP cases were missed by the DSM-5 criteria.
Meta-analyses remained significant even after controlling for overlapping measures,
confirming that excluded cases have real, replicable cognitive impairments.
Discussion
The DSM-5 approach is too rigid and does not account for the heterogeneity of
conditions like DP.
A symptom-based approach (PI20) is:
1. Faster and less burdensome (minutes vs. a 40-60 min test battery).
2. More reliable over time (better test-retest performance).
3. Less biased by gender, ethnicity or age, whereas standardized cognitive tests
often underestimate the abilities of certain groups.
Small modifications to the DSM-5 approach could already improve sensitivity, such as:
- Allowing a diagnosis at -2 SD on a single test.
- Including reaction times alongside accuracy measures.
Symptom-based methods are especially valuable in clinical practice where resources
are limited, and they can capture impairments overlooked by traditional tests.
Limitation: questionnaires must be validated and culturally adapted, but the PI20
performed exceptionally well, identifying 100% of DP cases as atypical.