Session 1
Evidence-Based Practice (EBP) definition:
Best available research + Clinical expertise + Patient characteristics, culture, and preferences.
EBP purpose: promote effective psychological practice and enhance public health by applying empirically
supported principles of psychological assessment, case formulation, therapeutic relationship, and intervention.
The three-legged stool of EBP
In CCAP, EBP can be seen as a 3-legged stool:
Each leg represents an essential evidence-based component:
1. Effective methods (e.g., evidence-based treatments (EBTs) and evidence-based assessments (EBAs))
2. Clinician factors (e.g., training, decision-making, therapeutic alliance)
3. Client factors (e.g., culture, comorbidity, family involvement)
These stand on a “mat” of epistemic processes – the ways we gain knowledge (like RCTs, qualitative research,
clinical experience). If any leg is weak or missing, practice becomes unstable.
This updated model highlights that we need evidence for clinician and client factors too, not just for the
treatments themselves.
The evidence base for any psychological intervention should be evaluated along two separate dimensions:
1. Efficacy (does it work?) = the systematic and scientific evaluation of whether a treatment works;
criteria concern the strength of evidence for causal relationships between interventions and the
disorders under treatment.
2. Clinical utility (is it usable in real-world settings?) = the applicability, feasibility, and usefulness of the
intervention in the local or specific setting where it is to be offered.
EBP vs. empirically supported treatments (ESTs):
- EBP starts with the patient; ESTs start with the treatment.
- EBP integrates multiple evidence types; ESTs focus on RCTs.
- EBP is broader (includes case formulation, alliance, assessment); ESTs are narrower (only treatment
efficacy in controlled clinical trials).
Best available research
Best research evidence refers to scientific results related to intervention strategies, assessment, clinical
problems, and patient populations in laboratory and field settings as well as to clinically relevant results of
basic research in psychology and related fields.
Multiple types of research evidence that contribute to effective psychological practice:
- Efficacy
- Effectiveness
- Cost-effectiveness
- Cost-benefit
- Epidemiological
- Treatment utilization
Research designs
- Clinical observation (including individual case studies) and basic psychological science are valuable
sources of innovations and hypotheses (the context of scientific discovery).
- Qualitative research can be used to describe the subjective, lived experiences of people, including
participants in psychotherapy.
- Systematic case studies are particularly useful when aggregated—as in the form of practice research
networks—for comparing individual patients with others with similar characteristics.
- Single-case experimental designs are particularly useful for establishing causal relationships in the
context of an individual.
- Public health and ethnographic research are especially useful for tracking the availability, utilization,
and acceptance of mental health treatments as well as suggesting ways of altering these treatments
to maximize their utility in a given social context.
- Process–outcome studies are especially valuable for identifying mechanisms of change.
- Studies of interventions as these are delivered in naturalistic settings (effectiveness research) are well
suited for assessing the ecological validity of treatments.
- RCTs and their logical equivalents (efficacy research) are the standard for drawing causal inferences
about the effects of interventions (context of scientific verification).
- Meta-analysis is a systematic means to synthesize results from multiple studies, test hypotheses, and
quantitatively estimate the size of effects.
“randomized controlled experiments represent a more stringent way to evaluate treatment efficacy because
they are the most effective way to rule out threats to internal validity in a single experiment.”
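To make the meta-analysis point concrete, here is a minimal fixed-effect pooling sketch using inverse-variance weighting; the effect sizes and variances are hypothetical, not from any real studies:

```python
# Minimal fixed-effect meta-analysis sketch (hypothetical data).
# Each study contributes an effect size d and its sampling variance v;
# studies are pooled with inverse-variance weights, so more precise
# studies (smaller v) count for more.

def pool_fixed_effect(effects, variances):
    """Return the inverse-variance-weighted mean effect and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Three hypothetical studies: standardized mean differences and variances.
d, var = [0.40, 0.55, 0.30], [0.04, 0.09, 0.02]
pooled, pooled_var = pool_fixed_effect(d, var)
print(round(pooled, 3), round(pooled_var, 4))
```

This is the simplest pooling model; a random-effects model would additionally estimate between-study heterogeneity.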
Clinical expertise
Is essential for identifying and integrating the best research evidence with clinical data (e.g., information about
the patient obtained over the course of treatment) in the context of the patient’s characteristics and
preferences to deliver services that have the highest probability of achieving the goals of therapy.
Involves:
- Assessment, diagnosis, systematic case formulation, treatment planning
- Clinical decision making, treatment implementation, monitoring patient progress
- Interpersonal expertise → is manifested in forming a therapeutic relationship, encoding and
decoding verbal and nonverbal responses, creating realistic but positive expectations, and responding
empathically to the patient’s explicit and implicit experiences and concerns.
- Self-reflection and acquisition of skills
- Evaluation and use of research evidence in both basic and applied psychological science
- Understanding the influence of individual and cultural differences on treatment
- Seeking available resources
- Having a rationale for clinical strategies.
Patient characteristics, culture & preferences:
Psychological services are most likely to be effective when they are responsive to the patient’s specific
problems, strengths, personality, sociocultural context, and preferences.
Psychologists must consider:
- Sociocultural context (ethnicity, race, gender, religion)
- Personality, developmental stage, symptom profile
- Treatment expectations and values
Exam-relevant examples:
- RCTs = gold standard for causal inference, but not the only valid evidence.
- Clinical utility = patient acceptability, cost-benefit, feasibility in the real world.
- Biases in judgment (e.g., confirmation bias, overconfidence) must be managed with feedback &
consultation.
Schmidt & Schimmelmann (2013) provide a critical reflection on the state of evidence-based psychotherapy
for youth, identifying research quality problems, narrow outcome measures, and a failure to adapt
interventions to real-world needs.
Child and adolescent psychotherapy research has shown remarkable progress towards evidence-informed
treatments in recent years. However, the field still faces major limitations:
Methodological aspects:
1. Quality of research is still weak
- Many studies are underpowered (small sample sizes).
- Very few studies include long-term follow-ups.
- Reliance on few and often non-blinded informants (e.g., teachers), which may lead to
biased/inconsistent information about treatment effects.
- Many report only short-term symptom change (avg. 5 months post-treatment).
Implication: We don’t know if treatment effects last, or whether booster sessions help.
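"Underpowered" can be made concrete with a back-of-the-envelope sample-size calculation for a two-arm trial. This sketch uses the standard normal-approximation formula with the usual z-values for a two-sided α = .05 and 80% power; the effect sizes are hypothetical:

```python
import math

def n_per_arm(d, z_alpha=1.96, z_beta=0.84):
    """Approximate per-arm sample size for a two-arm comparison of means
    with standardized effect size d (normal approximation)."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# A 'medium' effect (d = 0.5) needs roughly 63 children per arm;
# a 'small' effect (d = 0.2) needs several hundred, which many
# child-therapy trials never recruit.
print(n_per_arm(0.5), n_per_arm(0.2))
```

The point for the exam: trials with a few dozen participants per arm can only reliably detect medium-to-large effects, so small but clinically meaningful effects are easily missed.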
2. Outcome measures are too narrow
- Current research mostly focuses on
• Symptoms only (e.g., anxiety scores)
• Standardized tools in lab-like settings
- What’s missing:
• Real-world functioning (school, peers, family life)
• Idiographic change (what matters to this child)
• Use of multiple informants (parent + teacher + child = better picture)
Suggested improvements in methods:
- Lack of real-world data → Experience Sampling Method (ESM): real-time reports of thoughts/feelings
- Symptom-only focus → Youth Top Problems: client-chosen issues tracked over time
- Conflicting results → Range of Possible Changes (RPC) model to handle inconsistencies
- One-informant bias → use multiple sources + behavioral observations
Conceptual challenges:
1. Adult models applied to youth
- Many child CBTs are just ‘downward extensions’ of adult protocols.
- This assumes that children experience and respond to therapy like adults, which is not backed by
evidence.
- Implication: we need to validate and tailor models to age and developmental level &
cognitive/emotional readiness.
2. Lab vs real world
- Efficacy in RCTs ≠ effectiveness in community settings
- Why do treatments often fail in real-life care?
• More comorbidity
• Chaotic environments
• Less client motivation
• Clinicians don’t follow strict manuals
Proposed solution: modular/stepped-care models:
- Modular treatment (e.g., MATCH-ADTC): choose components that fit the individual child.
- Stepped care: start with minimal intervention and increase intensity as needed
- These approaches improve flexibility, cover comorbidity, and allow individualization.
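The stepped-care idea above can be sketched as a simple escalation rule. Everything here is a hypothetical illustration (the step names, the threshold, and the scores are invented, not a clinical protocol):

```python
# Hypothetical stepped-care sketch: escalate to the next, more intensive
# step only while the symptom score stays above a response threshold.

STEPS = ["guided self-help", "group CBT", "individual CBT",
         "individual CBT + family sessions"]

def stepped_care(scores_after_each_step, threshold=10):
    """Walk through STEPS; stop at the first step whose post-step score
    falls at or below the threshold (i.e., the child responded)."""
    for step, score in zip(STEPS, scores_after_each_step):
        if score <= threshold:
            return step  # responded at this intensity; no escalation
    return STEPS[-1]  # no response at any step: remain at the highest step

# Hypothetical symptom scores measured after each step.
print(stepped_care([14, 9, 7, 6]))  # responds at the second step
```

The design choice stepped care encodes: most children never receive the most intensive (and expensive) step, which is reserved for non-responders.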
Future directions:
Identify moderators → to know for whom treatments work best (e.g., age, gender, motivation)
Identify mediators → to understand how treatments work (e.g., therapeutic alliance, self-efficacy)
These are essential for:
- Understanding mechanisms of change
- Making therapies more targeted and personalized
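Moderation is typically tested as a treatment × moderator interaction term in a regression. A minimal sketch with fully synthetic data (NumPy only; age as a hypothetical moderator, and the "true" moderation effect is simulated at 0.3):

```python
import numpy as np

# Synthetic data: the treatment effect is simulated to grow with age.
rng = np.random.default_rng(0)
n = 200
treatment = rng.integers(0, 2, n)      # 0 = control, 1 = treatment
age = rng.uniform(8, 17, n)            # hypothetical moderator
outcome = 2.0 * treatment + 0.3 * treatment * age + rng.normal(0, 1, n)

# Regress outcome on treatment, age, and their interaction.
X = np.column_stack([np.ones(n), treatment, age, treatment * age])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(coef[3])  # interaction coefficient: should land near the simulated 0.3
```

A non-zero interaction coefficient means the treatment effect differs by age, i.e., age moderates the treatment; mediators would instead be tested with a path (indirect-effect) model.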