Vision-Related Quality of Life and Patient-Reported Outcomes in Ophthalmology

Vision‑related quality of life (VRQoL) and patient‑reported outcomes (PROs; measured with patient‑reported outcome measures, PROMs) capture how ocular disease and its treatment affect daily activities such as reading, driving, and mobility; subjective symptoms such as eye discomfort, photophobia, and diplopia; and psychosocial well‑being. PROMs complement traditional clinical metrics (e.g., visual acuity, visual fields, and anatomic imaging) and can inform shared decision-making, quality improvement, and coverage decisions/health technology assessments.[1] This article provides an overview of commonly used ophthalmic PROMs, core psychometric concepts (such as reliability, validity, and responsiveness), interpretation, and practical considerations for research and clinical care.

Definitions

Vision‑related quality of life (VRQoL): The subjective impact of vision on functioning and well‑being (e.g., reading, driving, mobility, social and role functioning, emotional health). In ophthalmology, widely used cross‑condition instruments include the National Eye Institute Visual Function Questionnaire (NEI‑VFQ‑25), the Impact of Vision Impairment (IVI) questionnaire, and the Low Vision Quality‑of‑Life (LVQOL) questionnaire.[2][3][4][5]

Patient‑reported outcome (PRO): A report of a patient’s health status that comes directly from the patient, without interpretation by a clinician or anyone else.[6][7]

Patient‑reported outcome measure (PROM): A validated questionnaire (with scoring documentation) designed to collect PRO data for a specific concept (e.g., vision‑related functioning, symptoms, health‑related quality of life).[8]

Types of PROMs

  • Generic health-related quality-of-life (HRQoL) instruments (e.g., EQ-5D, SF-36) allow comparisons across diseases and populations and can generate utility values for cost-effectiveness analyses (e.g., quality-adjusted life years [QALYs]). They are short, widely translated, and useful when payers or health-economics endpoints are important. However, because items are not vision-specific (e.g., they do not directly ask about glare, contrast, or driving), they are often less sensitive to clinically meaningful changes in ophthalmic conditions; many studies therefore pair a generic instrument with a vision-specific or disease-specific measure.[9]
  • Vision-specific instruments (e.g., NEI-VFQ-25; IVI; LVQOL) target how vision affects day-to-day function and participation. Domains typically include near and distance activities (reading, face recognition), mobility and peripheral vision, social and role functioning, and emotional well-being. These tools are more likely than generic HRQoL measures to detect changes relevant to patients with eye disease and can be used across diagnoses to compare impact or to complement clinical endpoints in trials. Many have undergone modern psychometric work (e.g., Rasch analysis, see below) to improve measurement properties, scoring, and cross-cultural use.[10][11]
  • Disease-specific instruments focus on a particular condition or symptom domain and are often the most responsive to clinical change in that condition. Examples include Ocular Surface Disease Index (OSDI) for dry eye (symptoms and vision-related function), Catquest-9SF/Cat-PROM5 for cataract (activity limitation and visual task difficulty), GQL-15 for glaucoma (difficulties with dark adaptation, glare, and peripheral vision), AS-20 and the Diplopia Questionnaire for adult strabismus/diplopia (psychosocial and functional impact; diplopia frequency/severity), and PedEyeQ in pediatrics (child self-report, proxy-for-child, and parent impact forms). Disease-specific measures are well-suited for routine care pathways and condition-focused trials, with the trade-off that results are less comparable across unrelated diseases than vision-specific or generic tools.[12][13][14][15][16][17]

Concepts and Psychometrics

Understanding how PROMs are built and evaluated helps clinicians choose instruments that are reliable, valid, responsive to change, and interpretable in practice and research. The concepts and measurement properties described below follow widely used frameworks in outcomes measurement and reporting.[18][19][20]

Concept of Interest

In PRO measurement, the concept of interest (COI), or simply the concept, is the specific symptom, function, or quality-of-life domain measured by an instrument (e.g., ocular dryness, glare, reading ability, driving). Clear definition of the COI guides item selection, scoring, and how results are interpreted in clinic or trials.[21][22]

Recall

The recall period is the time window patients are asked to consider when answering. It should match how quickly the concept changes and the follow-up schedule. For example, the OSDI uses a 1-week recall to capture fluctuating dry-eye symptoms,[23] whereas many NEI-VFQ-25 items ask about the past month to reflect day-to-day functioning over a longer period.[24] Some tools are current-state (no recall), such as the Diplopia Questionnaire that rates diplopia at the time of assessment.[25] Keep the recall period consistent across visits; mismatches can obscure true change or exaggerate variability.[26]

Reliability

Reliability reflects the degree to which a PROM yields consistent results when the underlying construct has not changed.

  • Internal consistency asks whether items that are meant to measure the same concept tend to move together. It is usually summarized with Cronbach’s α (or, in Rasch models, the person-separation index); a formula sketch appears after this list. As a rule of thumb, values around 0.70–0.90 suggest the items “hang together” well; very low values imply the items may be measuring different things, while very high values (>0.95) can indicate redundancy.
  • Test–retest reliability evaluates stability over time when no clinical change is expected (often quantified by intraclass correlation coefficients). High stability supports using the measure to detect true change rather than measurement noise.[27]
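
For orientation, the most common internal-consistency statistic, Cronbach’s α for a k-item scale, can be written as follows (a generic formula, not specific to any ophthalmic instrument):

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right)

where \sigma_i^{2} is the variance of item i and \sigma_X^{2} is the variance of the total score. The statistic rises as items covary more strongly, which is why highly redundant items can push it above 0.95.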

Validity and Responsiveness

Validity concerns whether a PROM measures what it is intended to measure; responsiveness concerns whether it can detect meaningful clinical change.

  • Content validity indicates that items comprehensively and appropriately cover the target concept (e.g., vision-related function) for the intended population and context of use.[28]
  • Construct validity checks whether scores behave the way we expect. For example, worse scores on a vision-specific PROM should correlate with worse visual function (convergent validity), show little or no correlation with unrelated constructs (divergent validity), and distinguish between groups that are clinically different (e.g., advanced vs early disease; known-groups validity).
  • Criterion validity refers to agreement with a true “gold standard.” For most PROMs there is no gold standard, so criterion validity is rarely the focus; instead, content and construct validity carry more weight in ophthalmology.[29]
  • Responsiveness is the ability to detect change when change has occurred. A practical example is the NEI-VFQ-25: early work in neovascular AMD suggested that a change of about 4–6 points mirrored a 15-letter (three-line) best-corrected visual acuity (BCVA) gain, while more recent analyses in diabetic macular edema support ~3–5 points as a clinically meaningful improvement on the composite score. In other words, very small shifts may be noise, whereas changes in these ranges are more likely to matter to patients (common summary statistics for responsiveness are sketched after this list).[30][31]
  • Floor and ceiling effects occur when many respondents score at the very bottom or top of the scale. This makes it hard to see deterioration (at the floor) or improvement (at the ceiling). As a rule of thumb, if a substantial proportion of patients cluster at either extreme, the instrument may struggle to detect change; choosing tools with items well matched to your patients’ ability levels helps avoid this problem.[32]
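
As referenced above, responsiveness is often summarized with simple distribution-based statistics; two common ones (generic formulas, not tied to any specific ophthalmic PROM) are the effect size (ES) and the standardized response mean (SRM):

ES = \frac{\bar{X}_{\text{post}} - \bar{X}_{\text{pre}}}{SD_{\text{baseline}}}, \qquad SRM = \frac{\bar{X}_{\text{post}} - \bar{X}_{\text{pre}}}{SD_{\text{change}}}

where SD_baseline is the standard deviation of baseline scores and SD_change is the standard deviation of the change scores; by convention, values near 0.2, 0.5, and 0.8 are read as small, moderate, and large.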

Measurement Models: What Clinicians Need to Know

Most legacy PROMs were built using classical test theory (CTT), where a patient’s total score is simply the sum or average of item responses. Under CTT, reports usually include internal consistency (e.g., Cronbach’s α) and test–retest reliability. This approach is adequate for routine use, but two limits are worth remembering: first, total scores can blend more than one underlying concept (for example, visual function and emotional impact), and second, scores behave like ordinal ranks, meaning that a 5-point change near the bottom of the scale may not mean the same thing as a 5-point change near the top.[33][34]

Newer work increasingly uses Rasch/item-response theory (IRT) models. These models incorporate how “much” of the trait a person has (e.g., vision-related function) and how “difficult” each item is. The resulting scores are approximately interval-scaled (i.e., equal steps), which makes before/after differences easier to interpret and supports standard analyses. Rasch/IRT also provides practical diagnostics: item fit (flags items that do not behave as expected), targeting (whether items are too easy or too hard for your clinic population), and differential item functioning (DIF) (whether items work differently across languages, cultures, ages, or sexes).[35]
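
For readers who want the underlying model, the simplest (dichotomous) Rasch model expresses the probability that person n succeeds on (or endorses) item i as a function of the person’s ability \theta_n and the item’s difficulty b_i:

P(X_{ni} = 1) = \frac{e^{\,\theta_n - b_i}}{1 + e^{\,\theta_n - b_i}}

Because person and item parameters are estimated on the same logit scale, Rasch-scaled PROM scores behave approximately as interval measures, which is what makes before/after differences easier to interpret.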

For clinicians, the practical takeaways are straightforward. Use the official scoring for the instrument you select and state which version you used; for example, the original NEI-VFQ-25 composite and Rasch-recalibrated scoring (e.g., NEI-VFQ-25C) are not interchangeable and may emphasize different domains (visual function vs psychosocial).[36] When a validated Rasch-scaled version exists (e.g., Catquest-9SF in cataract), prefer it for better targeting and interval scaling.[37] Finally, avoid direct comparisons across studies that used different scoring methods (CTT vs Rasch), and check whether your chosen instrument has undergone validated translation in the language you plan to use when treating multilingual populations.[38]

Interpretability

Score changes should be not only statistically significant but also meaningful to patients. Three practical ideas help with interpretation.

  • The minimal clinically important difference (MCID/MID) is the smallest change patients are likely to notice or value. The most useful estimates anchor changes in a PROM score to an external criterion (e.g., a change in BCVA).
  • The minimal detectable change (MDC), also called the smallest detectable change (SDC), is the smallest score difference beyond measurement error, at a chosen confidence level (usually 95%). Unlike MCID, MDC does not ask whether the change matters to patients; it tells you whether a change is likely to be real. Changes smaller than the MDC may reflect noise rather than true improvement or worsening (a generic formula is shown after this list).[39][40]
  • Differential item functioning (DIF) checks whether items work the same way across different groups (e.g., languages, cultures, ages, sexes). Identifying and addressing DIF supports valid cross-group comparisons and adaptations.[41]
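
As noted in the MDC bullet above, the 95% minimal detectable change is usually derived from test–retest reliability and the baseline score spread (a generic formula, not specific to any single ophthalmic PROM):

SEM = SD_{\text{baseline}} \times \sqrt{1 - r}, \qquad MDC_{95} = 1.96 \times \sqrt{2} \times SEM \approx 2.77 \times SEM

where r is the test–retest reliability (e.g., an intraclass correlation coefficient) and SEM is the standard error of measurement; observed changes smaller than MDC_{95} fall within measurement error.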

In routine practice and research, specify how you will judge meaningful change (e.g., MCID thresholds), confirm that the instrument’s reliability/validity are established in your target population, and note any floor/ceiling issues that could limit detection of change.

Clinical Application

Use Cases

  • In clinical trials, PRO endpoints can quantify symptom relief or functional benefit alongside visual acuity or field measures and may be considered in labeling claims when developed rigorously.[42][43]
  • In routine care and registries, PROMs help track patient-important outcomes over time and support benchmarking; for example, the Swedish National Cataract Register has captured the Rasch-scaled Catquest-9SF across >10 years to monitor surgical results and population trends.[44] Implementation studies in ophthalmology also highlight clinicians’ interest in using PROMs to guide conversations and decisions in clinic, while noting practical barriers that can be mitigated with electronic capture and clear presentation of results.[45]
  • For quality improvement (QI) and benchmarking, repeated PROM collection can identify service gaps (e.g., persistent glare after cataract surgery despite good acuity) and inform program-level changes.[46]
  • For cost-effectiveness work, use a utility score (0–1) that summarizes how vision affects overall health (e.g., the Visual Function Questionnaire–Utility Index (VFQ-UI), derived from the NEI-VFQ-25) to calculate QALYs (a brief worked example follows this list).[47]
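
As a purely illustrative example (the numbers are hypothetical, not taken from any cited study): if a treatment raises a patient’s VFQ-UI utility from 0.70 to 0.80 and that benefit persists for 5 years, the gain is

\Delta \text{QALY} = (0.80 - 0.70) \times 5 = 0.5

i.e., half a quality-adjusted life year, which can then be combined with incremental costs to estimate a cost per QALY gained.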

Selecting an Instrument

  1. Start with purpose. What do you need the PROM to do: support a trial endpoint, guide routine care, drive quality improvement, or inform cost-utility work? Pick the tool to match the job.
  2. Choose the concepts of interest: In ophthalmology, this may include symptoms (e.g., dryness, glare), function (reading, mobility, driving), broader quality of life, or a utility (0–1) for cost-effectiveness.
  3. Match the population. Consider age (child vs adult forms), literacy, and language. Use validated translations where available and note the reporter (self vs proxy in pediatrics).[48]
  4. Minimize burden. Shorter tools with clear wording are easier to implement. Aim for completion in a few minutes and decide when you will collect them (baseline and relevant follow-ups).
  5. Choose the mode. Use paper or ePRO (see below) for self-completion, or interviewer administration if needed. The NEI-VFQ-25 is available in both formats; offer large-print or audio assistance for low vision.[49][50]
  6. Check measurement quality. Use instruments with evidence of reliability, validity, and responsiveness in your condition. When available, prefer validated Rasch-scaled scoring (e.g., Catquest-9SF; NEI-VFQ-25C) for more interpretable change over time.[51][52]
  7. Plan interpretation. Decide up front how you will judge meaningful change (MCID/MID), how to handle floor/ceiling effects, and how PROMs will inform decisions in clinic or trials.[53]
  8. Confirm licensing/permissions. Some tools are free to use (e.g., the NEI-VFQ-25 manual), while others require permission (e.g., OSDI; some glaucoma instruments). Check rights for use and translation before deployment.
  9. Document clearly. In manuscripts or QI reports, name the instrument, version, language/translation, mode of administration (see below), and scoring algorithm.

Modes of Administration

PROMs can be completed as self-administered paper or electronic forms (ePRO), or via interviewer-administered formats. The NEI-VFQ-25 provides both interviewer and self-administered versions, with manuals and forms publicly available.[54] Electronic capture can reduce missing data, time-stamp entries, and facilitate dashboards that combine PROM results with clinical data; regulatory guidance supports ePRO in trials when content and equivalence are preserved.[55] For accessibility, offer large-print or audio/assisted options when literacy or vision limits self-completion; document the reporter and mode, as scores may differ between self- and interviewer-administered formats.[56]

Consent, Capacity, and Assistance

PROMs should reflect the patient’s own voice wherever possible. In clinical care and research, document who answered (patient, proxy, or interviewer), any assistance provided, and the language/translation used.

Adults

  • Prefer self-report when the patient has capacity to understand the questions and provide answers. Assistance may include reading items aloud, enlarging font, or using audio while avoiding leading explanations. Interviewers should follow standardized scripts (e.g., NEI-VFQ interviewer version) and avoid interpreting items for the patient.[57]
  • If the patient lacks capacity (e.g., acute delirium, advanced dementia) or cannot communicate, consider an observer-reported outcome (ObsRO) completed by a knowledgeable caregiver for observable behaviors (mobility, task performance). A caregiver-completed form that substitutes for the patient’s voice is a proxy report; by FDA definition, proxy reports are not PROs and should be labeled as such in methods and results.[58][59]
  • In routine care, PROM completion is typically part of clinical documentation, but local policy may require notification. In research, obtain informed consent/assent per IRB requirements and specify the reporter, mode, and translation in the protocol and case-report forms.[60]

Accessibility and assistance

  • Provide large-print/high-contrast versions, screen-reader compatible ePROs, or audio administration for patients with low vision or low literacy; record that assistance was used. Mode can influence scores, so keep mode consistent across visits when possible.[61][62]

Children and adolescents

  • Use age-appropriate self-report whenever feasible (many instruments support reliable self-report from ~8 years onward), accompanied by proxy/parent forms when recommended by the instrument.[63]
  • For younger children or those unable to self-report, use validated proxy versions and clearly identify the reporter (parent/caregiver). Proxy and child scores may differ; report them separately rather than substituting one for the other.[64]
  • When using youth versions of generic utilities (e.g., EQ-5D-Y) alongside vision-specific PROMs, follow the instrument’s guidance on reporters and value sets.

Summary of Validated PROMs in Ophthalmology

Disease / Use case | Full name | Abbreviation | Concepts of interest / primary clinical features measured | Items / recall | Scoring | Notes | Key references
Cross-condition (vision-specific) | National Eye Institute Visual Function Questionnaire–25 (legacy) | NEI-VFQ-25 | Vision-related activities (near/distance), driving, social/role function, ocular pain, peripheral vision, color vision | 25; ~past month | Composite and subscale scores (0–100; higher = better) | Self- or interviewer-administered; widely used across eye diseases | [65]
Cross-condition (vision-specific, Rasch) | NEI-VFQ-25 (Rasch-recalibrated) | NEI-VFQ-25C | Visual function and socio-emotional impact (reported separately) | 25; ~past month | Rasch-scaled scores (interval-like); separate Visual Function and Socio-emotional subscales | Not interchangeable with legacy composite; improved targeting/interpretability | [66]
Cross-condition (utility) | Visual Function Questionnaire-Utility Index | VFQ-UI | Preference-based health utility for QALYs (derived from 6 VFQ items) | 6; same as item stems | Utility 0–1 (higher = better) | Built for cost-effectiveness analyses in eye care | [67]
Generic HRQoL (cross-disease) | EuroQol-5 Dimension | EQ-5D | Mobility, self-care, usual activities, pain/discomfort, anxiety/depression (utilities) | 5; today | Utility 0–1 (higher = better) | Often paired with vision-specific PROMs; may be less sensitive to eye-specific change | [68]
Generic HRQoL (cross-disease) | Short Form-36 / Short Form-12 | SF-36 / SF-12 | Eight domains of physical and mental health | 36 or 12; 4-week recall | Domain and summary scores (higher = better) | Enables cross-condition comparisons; complement to vision-specific tools | [69]
Cornea (dry eye syndrome) | Ocular Surface Disease Index | OSDI | Dryness, irritation, photophobia; vision-related function | 12; 1-week recall | 0–100 (higher = worse); 0–12 normal, 13–22 mild, 23–32 moderate, 33–100 severe | Widely used in clinic/trials; permission required for use | [70]
Cornea (dry eye syndrome) | 5-Item Dry Eye Questionnaire | DEQ-5 | Watery eye, discomfort, dryness (frequency) and late-day discomfort/dryness (intensity) | 5; 1-week recall | 0–22 (higher = worse); screening cutoffs commonly ≥6 for DED and ≥12 for suspected Sjögren’s | Brief screener; widely used in clinic/epidemiology | [71]
Cornea (dry eye syndrome) | Standard Patient Evaluation of Eye Dryness | SPEED | Frequency/severity of dryness, irritation, fatigue; drop use | 8; current + past 72 h + past 3 mo recall anchors | 0–28 (higher = worse) | Validated; correlates with meibomian gland signs | [72]
Cornea (dry eye syndrome) | Symptom Assessment in Dry Eye | SANDE | Dry eye frequency and severity (two VAS) | 2; current state | Two 100-mm VAS combined to summary score (higher = worse) | Ultra-brief; tracks symptoms; aligns reasonably with OSDI | [73][74]
Cornea (dry eye syndrome) | Impact of Dry Eye on Everyday Life | IDEEL | Symptom bother, daily activities, treatment satisfaction | 57; past 2 wks (typ.) | Domain scores 0–100 (direction per manual) | Comprehensive dry-eye QoL; PRO developed to FDA standards | [75]
Cornea (dry eye syndrome) | Ocular Comfort Index | OCI | Ocular surface irritation/comfort (frequency & severity) | 12; 1-week recall | Rasch-scaled (interval-like); higher typically = more discomfort | Designed with Rasch; good repeatability | [76]
Cornea (dry eye syndrome) | Dry Eye-Related Quality-of-Life Score | DEQS | Symptoms and impact on daily life (incl. mental health) | 15; 1-week recall | 0–100 summary (higher = worse) | Validated in Japan; translations available | [77]
Cornea (contact lens) | Contact Lens Dry Eye Questionnaire–8 | CLDEQ-8 | Dryness/ocular discomfort in soft CL wearers | 8; recent symptoms | 0–37 (higher = worse); responsive to change | Common in CL research/clinical monitoring | [78]
Cornea (keratoconus) | Keratoconus Outcomes Research Questionnaire | KORQ | Activity limitation and keratoconus-specific symptoms | Two Rasch scales; no fixed recall | Rasch-scaled summary scores (higher = worse difficulty) | Designed specifically for keratoconus; responsive to treatment | [79]
Refractive | Quality of Vision Questionnaire | QoV | Glare, halos, starbursts—frequency, severity, bothersomeness | Short; current state | Three subscales reported; higher values indicate more symptoms | Sensitive to optical symptoms after refractive or cataract surgery | [80]
Refractive | NEI Refractive Error Quality-of-Life–42 | NEI-RQL-42 | Refractive correction satisfaction, visual symptoms, dependence on correction | 42; past month | 13 subscales; scoring per manual | Used in refractive outcomes; psychometric critiques exist | [81][82]
Cataract | Catquest–9 item Short Form | Catquest-9SF | Activity limitation and visual task difficulty due to cataract | 9; no fixed recall | Rasch-scaled summary score (interval-like; higher = more difficulty) | Used in national registries; highly responsive to surgery | [83]
Cataract (legacy) | Visual Function Index–14 | VF-14 | Difficulty across 14 vision-dependent activities | 14; no fixed recall | 0–100 (higher = better function) | Historical benchmark in cataract outcomes | [84]
Glaucoma (function) | Glaucoma Quality of Life–15 | GQL-15 | Difficulties with dark adaptation, glare, stairs, peripheral vision | 15; no fixed recall | Total score (higher = worse difficulty) | Validated in multiple languages; correlates with field loss | [85]
Glaucoma (symptoms) | Glaucoma Symptom Scale | GSS | Burning, tearing, photophobia, blurred vision, etc. | ~10–15; symptom-focused | Symptom and function subscales (direction per manual) | Brief glaucoma-specific symptom index | [86]
Oculoplastics (thyroid eye disease) | Graves’ Ophthalmopathy Quality of Life | GO-QOL | Visual functioning and appearance-related impact in TED | 16; no fixed recall | Two subscale scores: Visual Function and Appearance (higher = better) | Widely used in TED studies; many translations | [87]
Oculoplastics (thyroid eye disease) | Thyroid Eye Disease Quality of Life | TED-QOL | Brief disease-specific QoL for TED | Very brief; current state | Summary score (direction per manual) | Quick clinic screen; complements GO-QOL | [88]
Strabismus (adult) | Adult Strabismus-20 | AS-20 | Psychosocial impact and functional difficulties in adult strabismus | 20; no fixed recall | Psychosocial and Function subscales (higher = better function) | Responsive to surgery; Rasch refinements published | [89]
Diplopia | Diplopia Questionnaire | DQ | Diplopia frequency/severity across gaze positions and activities | Short; current state | 0–100 (higher = worse diplopia) | Reliable and responsive for quantifying diplopia burden | [90]
Pediatrics (condition-agnostic) | Pediatric Eye Questionnaire (child, proxy-for-child, parent) | PedEyeQ | Functional vision and eye-related QoL in children; parental impact | Age-specific forms; no fixed recall | Rasch-developed subscales (direction per manual) | Validated across pediatric eye conditions | [91]
Pediatrics (intermittent exotropia) | Intermittent Exotropia Questionnaire | IXTQ | HRQoL impact of intermittent exotropia (child/proxy/parent) | Child & proxy forms; no fixed recall | Total/subscale scores (Rasch versions available) | Widely used in PEDIG studies | [92]
Pediatrics (visual function) | Children’s Visual Function Questionnaire | CVFQ | Vision-related function and QoL in young children (parent-reported) | 35 (common version); no fixed recall | Subscale and total scores (direction per manual) | Early validated pediatric VRQoL instrument | [93]
Pediatrics (visual ability) | Cardiff Visual Ability Questionnaire for Children | CVAQC | Visual ability in everyday tasks (school/home) | ~25; child/proxy formats | Rasch-scaled scores (higher = better ability) | Useful for pediatric low vision | [94]
Pediatrics (low vision) | L. V. Prasad–Functional Vision Questionnaire | LVP-FVQ / LVP-FVQ II | Functional vision in school-age children with visual impairment | ~20 (V1) / 23 (V2); interview | Rasch-scaled scores; orientation per manual | Developed/validated in India; cross-cultural potential | [95][96]
Pediatrics (amblyopia treatment burden) | Amblyopia Treatment Index | ATI | Parent-reported burden of patching/atropine (adverse effects, compliance, social impact) | 18; recent treatment period | Three subscales 0–100 (higher = more burden) | Used extensively in PEDIG trials | [97]
Low vision / rehab | Impact of Vision Impairment (28-item) / Brief IVI (15-item) | IVI / Brief IVI | Participation, mobility, emotional well-being in low vision | 28 or 15; no fixed recall | Rasch-scaled total/subscale scores (higher = worse difficulty in many versions) | Sensitive to functional limits in low vision; cross-cultural use | [98][99]
Low vision / rehab | Low Vision Quality of Life Questionnaire | LVQOL | Daily activities and QoL in low vision | ~25; no fixed recall | Total and subscale scores (direction varies by scoring) | Original development with Rasch validations | [100][101]
Low vision / rehab | Veterans Affairs Low-Vision Visual Functioning Questionnaire–48 | VA LV VFQ-48 | Difficulty with daily visual activities in low vision | 48; no fixed recall | Rasch visual-ability measure (higher = better) | Sensitive across moderate–severe impairment | [102]
Low vision / rehab | Activity Inventory (Massof) | AI | Goal/task-oriented visual function and participation | Adaptive item bank; no fixed recall | Rasch person measure (visual ability; higher = better) | Widely used outcome in low-vision rehab programs | [103]
Digital eye strain | Computer Vision Syndrome Questionnaire | CVS-Q | Burning, itching, blurred vision, headache, photophobia with device use | 16; recent symptoms | Weighted score; ≥6 indicates CVS (higher = worse) | Validated in workers; multiple language versions | [104]

Abbreviations: QoL = quality of life; VRQoL = vision-related quality of life; HRQoL = health-related quality of life; QALY = quality-adjusted life year; TED = thyroid eye disease; DED = dry eye disease; VAS = visual analog scale; CL = contact lens; PROM = patient-reported outcome measure; PEDIG = Pediatric Eye Disease Investigator Group

Subscale = a summary score for a subset of related items (e.g., symptoms or activity limitations) within an instrument.

Licensing & Permissions

PROMs are intellectual property. Before use — especially in trials, registries, or electronic formats — verify the current terms for use, scoring, translation, and electronic migration. In general, do not alter item wording, response options, or scoring without permission; doing so can invalidate results and violate copyright or licensing agreements.[105][106]

Common Licensing Categories

  • Publicly available / free with citation. Materials are available to download and use with attribution; modification is discouraged. Example: NEI-VFQ-25 manual and forms from RAND (self- and interviewer-administered versions).[107]
  • Academic use permitted; permission required for commercial or translation. Many instruments allow no-fee academic/clinical use but require permission (and sometimes a fee) for industry use or new translations (usually coordinated via the instrument holder or a repository such as Mapi/ePROVIDE). Examples: PedsQL family of measures; several glaucoma/oculoplastics tools.[108]
  • License/permission required for all uses. Rights holders require formal permission; do not reprint items in publications without approval. Example: OSDI (rights held by AbbVie/Allergan; permissions commonly coordinated via ePROVIDE).[109]
  • Rasch-recalibrated scoring or short forms. Some instruments have updated scoring (e.g., NEI-VFQ-25C) or Rasch short forms (e.g., Catquest-9SF). Use the official algorithms and cite the specific version used; do not mix legacy and Rasch scores within an analysis set.[110][111]

Translations and Cultural Adaptation

Using a PROM in another language typically requires both permission from the rights holder and a documented translation/cultural-adaptation process (forward/back translation, expert review, cognitive debriefing with patients, and proofreading). Whenever possible, use an existing validated translation from the rights holder or repository.[112][113]

Electronic Administration

Moving a paper PROM to an electronic format is considered a migration and may need permission and equivalence testing, depending on the extent of layout or navigation changes. Use official electronic versions if provided by the rights holder. For new ePRO builds, follow established guidance (e.g., usability testing; evidence of score equivalence) and retain screenshots/version history in the study file.[114]

Regulatory & Reporting Standards

For study design and transparent reporting, clinicians and researchers should follow (and cite) established consensus standards:

  • Use COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) and ISOQOL (International Society for Quality of Life Research) criteria to justify instrument selection and scoring.[115][116][117]
  • In trials, plan PRO endpoints with SPIRIT-PRO (Standard Protocol Items: Recommendations for Interventional Trials—Patient-Reported Outcomes) and report them with CONSORT-PRO (Consolidated Standards of Reporting Trials—PRO).[118][119]
  • Align regulated studies with the FDA’s PFDD (Patient-Focused Drug Development) guidance for clinical outcome assessments, and follow good practice for ePRO migration/equivalence.[120][121]
  • When using translations, document permissions and the ISPOR (International Society for Pharmacoeconomics and Outcomes Research) translation/cultural-adaptation process (forward/back translation and cognitive debriefing).[122]

In all reports, include at minimum: instrument full name/abbreviation, version, language/translation, mode and reporter, recall period, scoring approach, handling of missing data, and thresholds for meaningful change.[123][124]

Limitations

PROMs add clinically useful information, but they have important limits that should be considered when selecting, administering, and interpreting instruments.

Measurement-Related

PROM scores can show floor/ceiling effects (many patients at the best or worst possible score), which reduces the ability to detect deterioration or improvement. Estimates of meaningful change (e.g., MCID/MID) are context-specific and may not transfer across diseases or severities; for example, NEI-VFQ-25 change thresholds differ between neovascular AMD and diabetic macular edema cohorts.[125][126] Recall periods (e.g., 1-week vs 1-month) can introduce recall bias and make comparisons difficult if not kept consistent across visits. Finally, patients can experience response shift (changes in internal standards/priorities over time), which alters how the same symptom burden is scored even when clinical measures are stable; this complicates longitudinal interpretation and trial endpoints.[127][128]

Instrument and Scoring Issues

Legacy scoring and modern Rasch-calibrated scoring are not interchangeable and may emphasize different domains (e.g., visual function vs socioemotional), so cross-study comparisons can be misleading if methods differ.[129] Translations require permission and formal cultural adaptation; ad-hoc translations risk measurement error and bias.[130] Moving a paper PROM to an electronic (ePRO) format is a migration that may need permission and evidence of score equivalence; layout changes can affect responses.[131] In addition, many PROMs are designed for self-report; proxy reports (e.g., caregiver) are useful in some contexts but are not PROs under FDA definitions and can diverge from patient experience.[132]

Implementation and Workflow

Clinic adoption can be limited by time, staff training, and integration into the EHR/registry workflow. Missing data, partial completion, and mode effects (paper vs tablet vs interviewer) reduce data quality if not planned for. Presenting results in a way that is actionable to clinicians (e.g., flagging large changes, linking to likely causes such as glare or diplopia) remains a common challenge in implementation studies.[133][134]

Generalizability and Confounding

PROM scores can be influenced by factors outside the target eye condition: bilateral vs unilateral disease, comorbid ocular conditions (e.g., ocular surface disease in glaucoma), non-ocular health, mental health, and social context. Generic HRQoL tools such as EQ-5D can be insensitive to vision-specific change, while vision-specific tools may not reflect broader health gains—each should be chosen to fit the decision at hand.[135][136]

Overall, PROMs should be interpreted alongside clinical measures (e.g., BCVA, visual fields, imaging) and administered with consistent recall periods, modes, and scoring.

References

  1. Denniston AK, Kyte D, Calvert M, Burr JM. An introduction to patient‑reported outcome measures in ophthalmic research. Eye (Lond). 2014;28(6):637‑645. doi:10.1038/eye.2014.41.
  2. RAND Health Care. NEI‑VFQ‑25 Manual. Available at: https://www.rand.org/content/dam/rand/www/external/health/surveys_tools/vfq/vfq25_manual.pdf
  3. National Eye Institute. Visual Function Questionnaire‑25 (VFQ‑25). Available at: https://www.nei.nih.gov/learn-about-eye-health/outreach-resources/outreach-materials/visual-function-questionnaire-25
  4. Lamoureux EL, Pallant JF, Pesudovs K, et al. The Impact of Vision Impairment Questionnaire: an evaluation of its measurement properties using Rasch analysis. Invest Ophthalmol Vis Sci. 2006;47(11):4732‑4741.
  5. Wolffsohn JS, Cochrane AL. Design of the Low Vision Quality‑of‑Life Questionnaire (LVQOL) and measuring the outcome of low‑vision rehabilitation. Br J Ophthalmol. 2000;84(9):1035‑1040.
  6. U.S. Food and Drug Administration. Patient‑Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. 2009. Available at: https://www.fda.gov/media/77832/download
  7. Denniston AK, Kyte D, Calvert M, Burr JM. An introduction to patient‑reported outcome measures in ophthalmic research. Eye (Lond). 2014;28(6):637‑645. doi:10.1038/eye.2014.41.
  8. U.S. Food and Drug Administration. Patient‑Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. 2009. Available at: https://www.fda.gov/media/77832/download
  9. Macedo AF, Marques S, Pereira da Silva D, Ramos PL, Haas P. Predictors of problems reported on the EQ-5D-3L dimensions for patients with ophthalmic disease. BMC Ophthalmol. 2022;22:368. doi:10.1186/s12886-022-02579-7.
  10. Lamoureux EL, Pallant JF, Pesudovs K, Hassell JB, Keeffe JE. The Impact of Vision Impairment Questionnaire: an evaluation of its measurement properties using Rasch analysis. Invest Ophthalmol Vis Sci. 2006;47(11):4732-4741.
  11. Mylona I, Aletras V, Ziakas N, Tsinopoulos I. Rasch Validation of the LVQOL Scale. Acta Medica (Hradec Kralove). 2021;64(2):108-120. doi:10.14712/18059694.2021.19.
  12. Schiffman RM, Christianson MD, Jacobsen G, Hirsch JD, Reis BL. Reliability and validity of the Ocular Surface Disease Index. Arch Ophthalmol. 2000;118(5):615-621. doi:10.1001/archopht.118.5.615.
  13. Lundström M, Pesudovs K. Catquest-9SF patient outcomes questionnaire: nine-item short-form Rasch-scaled revision. J Cataract Refract Surg. 2009;35(3):504-513. doi:10.1016/j.jcrs.2008.11.038.
  14. Nelson P, Aspinall P, Papasouliotis O, Worton B, O’Brien C. Quality of life in glaucoma and its relationship with visual field loss. Eye (Lond). 2003;17(4):544-551.
  15. Hatt SR, Leske DA, Bradley EA, Cole SR, Holmes JM. Development of a quality-of-life questionnaire for adults with strabismus. Ophthalmology. 2009;116(1):139-144. doi:10.1016/j.ophtha.2008.08.050.
  16. Holmes JM, Liebermann L, Hatt SR, Smith SJ, Leske DA. Quantifying diplopia with a questionnaire. Ophthalmology. 2013;120(7):1492-1496. doi:10.1016/j.ophtha.2012.12.032.
  17. Leske DA, Hatt SR, Yamada T, et al. Validation of the Pediatric Eye Questionnaire (PedEyeQ) in children with visual impairment. Am J Ophthalmol. 2019;207:66-73. doi:10.1016/j.ajo.2019.06.008.
  18. Prinsen CAC, Mokkink LB, Bouter LM, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures (PROMs). Qual Life Res. 2018;27(5):1147-1157. doi:10.1007/s11136-018-1798-3.
  19. Mokkink LB, Elsman EBM, Terwee CB. COSMIN guideline for systematic reviews of patient-reported outcome measures version 2.0. Qual Life Res. 2024;33(11):2929-2939. doi:10.1007/s11136-024-03761-6.
  20. Reeve BB, Wyrwich KW, Wu AW, et al. ISOQOL recommends minimum standards for patient-reported outcome measures used in patient-centered outcomes and comparative effectiveness research. Qual Life Res. 2013;22(8):1889-1905. doi:10.1007/s11136-012-0344-y.
  21. U.S. Food and Drug Administration. Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. Guidance for Industry. 2009.
  22. Reeve BB, Wyrwich KW, Wu AW, et al. ISOQOL minimum standards for patient-reported outcome measures used in patient-centered outcomes and comparative effectiveness research. Qual Life Res. 2013;22(8):1889-1905. doi:10.1007/s11136-012-0344-y.
  23. Schiffman RM, Christianson MD, Jacobsen G, Hirsch JD, Reis BL. Reliability and validity of the Ocular Surface Disease Index. Arch Ophthalmol. 2000;118(5):615-621. doi:10.1001/archopht.118.5.615.
  24. Mangione CM, Lee PP, Gutierrez PR, et al. Development of the 25-item National Eye Institute Visual Function Questionnaire. Arch Ophthalmol. 2001;119(7):1050-1058. doi:10.1001/archopht.119.7.1050.
  25. Holmes JM, Liebermann L, Hatt SR, Smith SJ, Leske DA. Quantifying diplopia with a questionnaire. Ophthalmology. 2013;120(7):1492-1496. doi:10.1016/j.ophtha.2012.12.032.
  26. Prinsen CAC, Mokkink LB, Bouter LM, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures (PROMs). Qual Life Res. 2018;27(5):1147-1157. doi:10.1007/s11136-018-1798-3.
  27. Mokkink LB, Elsman EBM, Terwee CB. COSMIN guideline for systematic reviews of patient-reported outcome measures version 2.0. Qual Life Res. 2024;33(11):2929-2939. doi:10.1007/s11136-024-03761-6.
  28. Reeve BB, Wyrwich KW, Wu AW, et al. ISOQOL recommends minimum standards for patient-reported outcome measures used in patient-centered outcomes and comparative effectiveness research. Qual Life Res. 2013;22(8):1889-1905. doi:10.1007/s11136-012-0344-y.
  29. Prinsen CAC, Mokkink LB, Bouter LM, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures (PROMs). Qual Life Res. 2018;27(5):1147-1157. doi:10.1007/s11136-018-1798-3.
  30. Suñer IJ, Kokame GT, Yu E, Ward J, Dolan C, Saperstein DA. Responsiveness of NEI VFQ-25 to changes in visual acuity in neovascular AMD: validation studies from two phase 3 clinical trials. Invest Ophthalmol Vis Sci. 2009;50(8):3629-3635. doi:10.1167/iovs.08-2910.
  31. Bressler NM, Su W, Maguire MG, et al. Clinically meaningful change estimates for the NEI VFQ-25 in patients with diabetic macular edema. Transl Vis Sci Technol. 2024;13(12):27. doi:10.1167/tvst.13.12.27.
  32. Prinsen CAC, Mokkink LB, Bouter LM, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures (PROMs). Qual Life Res. 2018;27(5):1147-1157. doi:10.1007/s11136-018-1798-3.
  33. Prinsen CAC, Mokkink LB, Bouter LM, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures (PROMs). Qual Life Res. 2018;27(5):1147-1157. doi:10.1007/s11136-018-1798-3.
  34. Reeve BB, Wyrwich KW, Wu AW, et al. ISOQOL recommends minimum standards for patient-reported outcome measures used in patient-centered outcomes and comparative effectiveness research. Qual Life Res. 2013;22(8):1889-1905. doi:10.1007/s11136-012-0344-y.
  35. Gothwal VK, Wright TA, Lamoureux EL, Pesudovs K. Rasch analysis of the Quality of Life and Vision Function Questionnaire (QOL-VFQ). Optom Vis Sci. 2009;86(7):E836-E844. doi:10.1097/OPX.0b013e3181ae1ec7.
  36. Goldstein JE, Bradley C, Gross AL, Jackson ML, Bressler NM, Massof RW. The NEI VFQ-25C: calibrating items in the National Eye Institute Visual Function Questionnaire-25 to enable comparison of outcome measures. Transl Vis Sci Technol. 2022;11(5):10. doi:10.1167/tvst.11.5.10.
  37. Lundström M, Pesudovs K. Catquest-9SF patient outcomes questionnaire: nine-item short-form Rasch-scaled revision. J Cataract Refract Surg. 2009;35(3):504-513. doi:10.1016/j.jcrs.2008.11.038.
  38. Prinsen CAC, Mokkink LB, Bouter LM, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures (PROMs). Qual Life Res. 2018;27(5):1147-1157. doi:10.1007/s11136-018-1798-3.
  39. van Kampen DA, Willems WJ, van Beers LWAH, Castelein RM, Scholtes VAB. Determination and comparison of the smallest detectable change (SDC) and the minimal important change (MIC) in PROMs. J Orthop Sports Phys Ther. 2013;43(8):549-558. doi:10.2519/jospt.2013.4507.
  40. Seamon BA, Beard DJ. Revisiting the concept of minimal detectable change for patient-reported outcomes. Arch Phys Med Rehabil. 2022;103(10):1976-1982. doi:10.1016/j.apmr.2022.04.017.
  41. Goldstein JE, Bradley C, Gross AL, Jackson ML, Bressler NM, Massof RW. The NEI VFQ-25C: calibrating items in the National Eye Institute Visual Function Questionnaire-25 to enable comparison of outcome measures. Transl Vis Sci Technol. 2022;11(5):10. doi:10.1167/tvst.11.5.10.
  42. U.S. Food and Drug Administration. Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. Guidance for Industry. 2009.
  43. U.S. Food and Drug Administration. Patient-Focused Drug Development: Selecting, Developing, or Modifying Fit-for-Purpose Clinical Outcome Assessments. Guidance for Industry. 2022.
  44. Lundström M, Pesudovs K, Ohlsson B, et al. Catquest-9SF functioning over a decade—findings from the Swedish National Cataract Register. Eye and Vision. 2020;7:54. doi:10.1186/s40662-020-00220-4.
  45. Robertson AO, Tadić V, Rahi JS. Attitudes, experiences, and preferences of ophthalmic professionals regarding routine use of patient-reported outcome measures in clinical practice. PLoS One. 2020;15(12):e0243563. doi:10.1371/journal.pone.0243563.
  46. Braithwaite T, Calvert M, Gray A, Pesudovs K, Denniston AK. The use of patient-reported outcome research in modern ophthalmology: impact on clinical trials and routine clinical practice. Patient Relat Outcome Meas. 2019;10:9-24. doi:10.2147/PROM.S162802.
  47. Rentz AM, Kowalski JW, Walt JG, et al. Development of a preference-based index from the National Eye Institute Visual Function Questionnaire-25. JAMA Ophthalmol. 2014;132(3):310-318. doi:10.1001/jamaophthalmol.2013.7639.
  48. Leske DA, Hatt SR, Yamada T, et al. Validation of the Pediatric Eye Questionnaire (PedEyeQ) in children with visual impairment. Am J Ophthalmol. 2019;207:66-73. doi:10.1016/j.ajo.2019.06.008.
  49. RAND Health Care. Visual Function Questionnaire (VFQ-25): Manual and instruments. 2000.
  50. Coons SJ, Gwaltney CJ, Hays RD, et al. Capturing patient-reported outcome data electronically in clinical trials. Patient. 2015;8(4):301-309. doi:10.1007/s40271-014-0090-z.
  51. Prinsen CAC, Mokkink LB, Bouter LM, et al. COSMIN guideline for systematic reviews of PROMs. Qual Life Res. 2018;27(5):1147-1157. doi:10.1007/s11136-018-1798-3.
  52. Goldstein JE, Bradley C, Gross AL, Jackson ML, Bressler NM, Massof RW. The NEI VFQ-25C. Transl Vis Sci Technol. 2022;11(5):10. doi:10.1167/tvst.11.5.10.
  53. Bressler NM, Su W, Maguire MG, et al. Clinically meaningful change estimates for the NEI VFQ-25 in DME. Transl Vis Sci Technol. 2024;13(12):27. doi:10.1167/tvst.13.12.27.
  54. RAND Health Care. Visual Function Questionnaire (VFQ-25): Manual and instruments. 2000. Accessed September 1, 2025.
  55. Coons SJ, Gwaltney CJ, Hays RD, et al. Capturing patient-reported outcome (PRO) data electronically: the past, present, and promise of ePRO measurement in clinical trials. Patient. 2015;8(4):301-309. doi:10.1007/s40271-014-0090-z.
  56. RAND Health Care. VFQ-25 Manual. 2000. Accessed September 1, 2025.
  57. RAND Health Care. Visual Function Questionnaire (VFQ-25): Manual and instruments. 2000.
  58. U.S. Food and Drug Administration. Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. Guidance for Industry. 2009.
  59. U.S. Food and Drug Administration. Patient-Focused Drug Development: Selecting, Developing, or Modifying Fit-for-Purpose Clinical Outcome Assessments. Guidance for Industry. 2022.
  60. International Society for Quality of Life Research (ISOQOL). User’s Guide to Implementing Patient-Reported Outcomes Assessment in Clinical Practice. Version 2. 2015.
  61. Coons SJ, Gwaltney CJ, Hays RD, et al. Capturing patient-reported outcome data electronically in clinical trials: the past, present, and promise of ePRO. Patient. 2015;8(4):301-309. doi:10.1007/s40271-014-0090-z.
  62. RAND Health Care. VFQ-25 Manual. 2000.
  63. Leske DA, Hatt SR, Yamada T, et al. Validation of the Pediatric Eye Questionnaire (PedEyeQ) in children with visual impairment. Am J Ophthalmol. 2019;207:66-73. doi:10.1016/j.ajo.2019.06.008.
  64. U.S. Food and Drug Administration. Patient-Reported Outcome Measures: Guidance for Industry. 2009.
  65. Mangione CM, Lee PP, Gutierrez PR, et al. Development of the 25-item National Eye Institute Visual Function Questionnaire. Arch Ophthalmol. 2001;119(7):1050-1058. doi:10.1001/archopht.119.7.1050.
  66. Goldstein JE, Bradley C, Gross AL, Jackson ML, Bressler NM, Massof RW. The NEI VFQ-25C. Transl Vis Sci Technol. 2022;11(5):10. doi:10.1167/tvst.11.5.10.
  67. Rentz AM, Kowalski JW, Walt JG, et al. Development of a preference-based index from the NEI VFQ-25. JAMA Ophthalmol. 2014;132(3):310-318. doi:10.1001/jamaophthalmol.2013.7639.
  68. Macedo AF, Marques S, Pereira da Silva D, Ramos PL, Haas P. BMC Ophthalmol. 2022;22:368. doi:10.1186/s12886-022-02579-7.
  69. Ware JE Jr, Sherbourne CD. Med Care. 1992;30(6):473-483. doi:10.1097/00005650-199206000-00002.
  70. Schiffman RM, Christianson MD, Jacobsen G, Hirsch JD, Reis BL. Arch Ophthalmol. 2000;118(5):615-621. doi:10.1001/archopht.118.5.615.
  71. Chalmers RL, Begley CG, Caffery B. Validation of the 5-Item Dry Eye Questionnaire (DEQ-5). Cont Lens Anterior Eye. 2010;33(2):55-60.
  72. Ngo W, Situ P, Keir N, Korb D, Blackie C, Simpson T. Psychometric properties and validation of the SPEED questionnaire. Cornea. 2013;32(9):1204-1210.
  73. Schaumberg DA, Gulati A, Mathers WD, et al. Development/validation of SANDE. Cornea. 2007;26(3): (VAS development).
  74. Amparo F, Schaumberg DA, Dana R. Comparison of OSDI and SANDE. Ophthalmology. 2015;122(7):1498-1503.
  75. Abetz L, Rajagopalan K, Mertzanis P, et al. Development and validation of IDEEL. Health Qual Life Outcomes. 2011;9:111.
  76. Johnson ME, Murphy PJ. Ocular Comfort Index Rasch validation. Invest Ophthalmol Vis Sci. 2007;48(10):4451-4458.
  77. Sakane Y, Yamaguchi M, Yokoi N, et al. DEQS development/validation. JAMA Ophthalmol. 2013;131(10):1331-1338.
  78. Chalmers RL, Begley CG, Moody K, Hickson-Curran S. CLDEQ-8 validation. Cont Lens Anterior Eye. 2012;35(4):171-178.
  79. Khadka J, Fenwick E, Lamoureux EL, Pesudovs K. Development of the KORQ using Rasch analysis. Invest Ophthalmol Vis Sci. 2012;53: (development study).
  80. McAlinden C, Pesudovs K, Moore JE. Quality of vision questionnaire: development and validation. J Cataract Refract Surg. 2010;36(2): (development study).
  81. Nichols JJ, Mitchell GL, Zadnik K. NEI-RQL-42 development. Arch Ophthalmol. 2003;121: (development).
  82. McAlinden C, Pesudovs K. Psychometric assessment of NEI-RQL-42. Invest Ophthalmol Vis Sci. 2011;52(9): (assessment).
  83. Lundström M, Pesudovs K. J Cataract Refract Surg. 2009;35(3):504-513. doi:10.1016/j.jcrs.2008.11.038.
  84. Steinberg EP, Tielsch JM, Schein OD, et al. Arch Ophthalmol. 1994;112(5):630-638.
  85. Nelson P, Aspinall P, Papasouliotis O, Worton B, O’Brien C. Eye (Lond). 2003;17(4):544-551.
  86. Lee BL, Gutierrez P, Gordon M, et al. Arch Ophthalmol. 1998;116(7):861-866. doi:10.1001/archopht.116.7.861.
  87. Terwee CB, Gerding MN, Dekker FW, Prummel MF, Wiersinga WM. Br J Ophthalmol. 1998;82(7):773-779.
  88. Ponto KA, Hommel G, Pitz S, et al. Short questionnaire to assess QoL in Graves’ orbitopathy: development/validation. Ophthalmology. 2011;118: (validation study).
  89. Hatt SR, Leske DA, Bradley EA, Cole SR, Holmes JM. Ophthalmology. 2009;116(1):139-144. doi:10.1016/j.ophtha.2008.08.050.
  90. Holmes JM, Liebermann L, Hatt SR, Smith SJ, Leske DA. Ophthalmology. 2013;120(7):1492-1496. doi:10.1016/j.ophtha.2012.12.032.
  91. Leske DA, Hatt SR, Yamada T, et al. Am J Ophthalmol. 2019;207:66-73. doi:10.1016/j.ajo.2019.06.008.
  92. Hatt SR, Leske DA, Holmes JM, et al. IXTQ development/validation. Ophthalmology. 2010;117: (development study).
  93. Birch EE, Cheng CS, Felius J. Development of the CVFQ. J AAPOS. 2001;5: (development study).
  94. Tadić V, Cooper A, Cumberland P, Rahi JS. CVAQC development/validation. Ophthalmology. 2013;120: (validation study).
  95. Gothwal VK, Lovie-Kitchin J, Nutheti R. LVP-FVQ development. Invest Ophthalmol Vis Sci. 2003;44(9): (development).
  96. Gothwal VK, et al. LVP-FVQ II validation. Invest Ophthalmol Vis Sci. 2012;53: (validation).
  97. Holmes JM, Strauber S, Quinn GE, et al. Further validation of the ATI. J AAPOS. 2008;12(6):581-584.
  98. Lamoureux EL, Pallant JF, Pesudovs K, et al. Invest Ophthalmol Vis Sci. 2006;47(11):4732-4741.
  99. Fenwick EK, Pesudovs K, Khadka J, et al. Qual Life Res. 2017;26(10):2713-2723. doi:10.1007/s11136-017-1596-9.
  100. Wolffsohn JS, Cochrane AL. Am J Ophthalmol. 2000;130(6):793-802.
  101. Mylona I, Aletras V, Ziakas N, Tsinopoulos I. Acta Medica (Hradec Kralove). 2021;64(2):108-120. doi:10.14712/18059694.2021.19.
  102. Stelmack JA, Szlyk JP, Stelmack TR, et al. Psychometric properties of VA LV VFQ-48. Invest Ophthalmol Vis Sci. 2004;45(11): (validation).
  103. Massof RW, Hsu CT, Baker FH, et al. The Activity Inventory. Arch Ophthalmol. 2005;123: (development); see also TVST. 2021: calibration paper.
  104. Seguí MM, Cabrero-García J, Crespo A, et al. CVS-Q validation. J Clin Epidemiol. 2015;68(6):662-673.
  105. U.S. Food and Drug Administration. Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. Guidance for Industry. 2009.
  106. Reeve BB, Wyrwich KW, Wu AW, et al. ISOQOL minimum standards for patient-reported outcome measures used in patient-centered outcomes and comparative effectiveness research. Qual Life Res. 2013;22(8):1889-1905. doi:10.1007/s11136-012-0344-y.
  107. RAND Health Care. Visual Function Questionnaire (VFQ-25): Manual and instruments. 2000. Accessed September 2, 2025.
  108. Varni JW, Seid M, Kurtin PS. PedsQL™ 4.0: reliability and validity of the Pediatric Quality of Life Inventory version 4.0 generic core scales. Med Care. 2001;39(8):800-812. doi:10.1097/00005650-200108000-00006.
  109. Schiffman RM, Christianson MD, Jacobsen G, Hirsch JD, Reis BL. Reliability and validity of the Ocular Surface Disease Index. Arch Ophthalmol. 2000;118(5):615-621. doi:10.1001/archopht.118.5.615.
  110. Goldstein JE, Bradley C, Gross AL, Jackson ML, Bressler NM, Massof RW. The NEI VFQ-25C. Transl Vis Sci Technol. 2022;11(5):10. doi:10.1167/tvst.11.5.10.
  111. Lundström M, Pesudovs K. Catquest-9SF patient outcomes questionnaire: nine-item short-form Rasch-scaled revision. J Cataract Refract Surg. 2009;35(3):504-513. doi:10.1016/j.jcrs.2008.11.038.
  112. Wild D, Grove A, Martin M, et al. Principles of good practice for the translation and cultural adaptation process for patient-reported outcomes (PRO) measures: report of the ISPOR task force. Value Health. 2005;8(2):94-104. doi:10.1111/j.1524-4733.2005.04054.x.
  113. Prinsen CAC, Mokkink LB, Bouter LM, et al. COSMIN guideline for systematic reviews of PROMs. Qual Life Res. 2018;27(5):1147-1157. doi:10.1007/s11136-018-1798-3.
  114. Coons SJ, Gwaltney CJ, Hays RD, et al. Capturing patient-reported outcome data electronically in clinical trials: the past, present, and promise of ePRO. Patient. 2015;8(4):301-309. doi:10.1007/s40271-014-0090-z.
  115. Prinsen CAC, Mokkink LB, Bouter LM, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures (PROMs). Qual Life Res. 2018;27(5):1147-1157. doi:10.1007/s11136-018-1798-3.
  116. Mokkink LB, Elsman EBM, Terwee CB. COSMIN guideline for systematic reviews of patient-reported outcome measures version 2.0. Qual Life Res. 2024;33(11):2929-2939. doi:10.1007/s11136-024-03761-6.
  117. Reeve BB, Wyrwich KW, Wu AW, et al. ISOQOL recommends minimum standards for patient-reported outcome measures used in patient-centered outcomes and comparative effectiveness research. Qual Life Res. 2013;22(8):1889-1905. doi:10.1007/s11136-012-0344-y.
  118. Calvert M, King M, Mercieca-Bebber R, et al. SPIRIT-PRO Extension: guideline for inclusion of patient-reported outcomes in clinical trial protocols. BMJ. 2018;362:k969.
  119. Calvert M, Blazeby J, Altman DG, et al. Reporting of patient-reported outcomes in randomized trials: the CONSORT PRO extension. JAMA. 2013;309(8):814-822. doi:10.1001/jama.2013.879.
  120. U.S. Food and Drug Administration. Patient-Focused Drug Development: Selecting, Developing, or Modifying Fit-for-Purpose Clinical Outcome Assessments. Guidance for Industry. 2022.
  121. Coons SJ, Gwaltney CJ, Hays RD, et al. Capturing patient-reported outcome data electronically in clinical trials: the past, present, and promise of ePRO. Patient. 2015;8(4):301-309. doi:10.1007/s40271-014-0090-z.
  122. Wild D, Grove A, Martin M, et al. Principles of good practice for the translation and cultural adaptation process for patient-reported outcomes (PRO) measures: report of the ISPOR task force. Value Health. 2005;8(2):94-104. doi:10.1111/j.1524-4733.2005.04054.x.
  123. Bressler NM, Su W, Maguire MG, et al. Clinically meaningful change estimates for the NEI VFQ-25 in patients with diabetic macular edema. Transl Vis Sci Technol. 2024;13(12):27. doi:10.1167/tvst.13.12.27.
  124. Suñer IJ, Kokame GT, Yu E, Ward J, Dolan C, Saperstein DA. Responsiveness of NEI VFQ-25 to changes in visual acuity in neovascular AMD: validation studies from two phase 3 clinical trials. Invest Ophthalmol Vis Sci. 2009;50(8):3629-3635. doi:10.1167/iovs.08-2910.
  125. Suñer IJ, Kokame GT, Yu E, Ward J, Dolan C, Saperstein DA. Responsiveness of NEI VFQ-25 to changes in visual acuity in neovascular AMD: validation studies from two phase 3 clinical trials. Invest Ophthalmol Vis Sci. 2009;50(8):3629-3635. doi:10.1167/iovs.08-2910.
  126. Bressler NM, Su W, Maguire MG, et al. Clinically meaningful change estimates for the NEI VFQ-25 in patients with diabetic macular edema. Transl Vis Sci Technol. 2024;13(12):27. doi:10.1167/tvst.13.12.27.
  127. Reeve BB, Wyrwich KW, Wu AW, et al. ISOQOL minimum standards for patient-reported outcome measures used in patient-centered outcomes and comparative effectiveness research. Qual Life Res. 2013;22(8):1889-1905. doi:10.1007/s11136-012-0344-y.
  128. Prinsen CAC, Mokkink LB, Bouter LM, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures (PROMs). Qual Life Res. 2018;27(5):1147-1157. doi:10.1007/s11136-018-1798-3.
  129. Goldstein JE, Bradley C, Gross AL, Jackson ML, Bressler NM, Massof RW. The NEI VFQ-25C: calibrating items in the National Eye Institute Visual Function Questionnaire-25 to enable comparison of outcome measures. Transl Vis Sci Technol. 2022;11(5):10. doi:10.1167/tvst.11.5.10.
  130. Wild D, Grove A, Martin M, et al. Principles of good practice for the translation and cultural adaptation process for patient-reported outcomes (PRO) measures: report of the ISPOR task force. Value Health. 2005;8(2):94-104. doi:10.1111/j.1524-4733.2005.04054.x.
  131. Coons SJ, Gwaltney CJ, Hays RD, et al. Capturing patient-reported outcome data electronically in clinical trials: the past, present, and promise of ePRO. Patient. 2015;8(4):301-309. doi:10.1007/s40271-014-0090-z.
  132. U.S. Food and Drug Administration. Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. Guidance for Industry. 2009.
  133. Braithwaite T, Calvert M, Gray A, Pesudovs K, Denniston AK. The use of patient-reported outcome research in modern ophthalmology: impact on clinical trials and routine clinical practice. Patient Relat Outcome Meas. 2019;10:9-24. doi:10.2147/PROM.S162802.
  134. Coons SJ, Gwaltney CJ, Hays RD, et al. Patient. 2015;8(4):301-309. doi:10.1007/s40271-014-0090-z.
  135. Macedo AF, Marques S, Pereira da Silva D, Ramos PL, Haas P. Predictors of problems reported on the EQ-5D-3L dimensions for patients with ophthalmic disease. BMC Ophthalmol. 2022;22:368. doi:10.1186/s12886-022-02579-7.
  136. Braithwaite T, Calvert M, Gray A, Pesudovs K, Denniston AK. Patient Relat Outcome Meas. 2019;10:9-24. doi:10.2147/PROM.S162802.