Definition of Cantab abbreviation from the Oxford Advanced Learner's Dictionary

  • James Cox MA (Cantab)


Cantab noun & adjective


What does the word Cantab mean?

There are two meanings listed in OED's entry for the word Cantab. See 'Meaning & use' for definitions, usage, and quotation evidence.

Entry status

OED is undergoing a continuous programme of revision to modernize and improve definitions. This entry has not yet been fully revised.

Where does the word Cantab come from?

Earliest known use

The earliest known use of the word Cantab is in the mid 1700s.

OED's earliest evidence for Cantab is from 1751, in the writing of Francis Coventry, author.

Cantab is of multiple origins. Partly (i) formed within English, by clipping or shortening. Partly (ii) a borrowing from Latin.

Etymons: Cantabrigian adj. & n.; Latin Cantab.



Entry history for Cantab, n. & adj.

Cantab, n. & adj. was first published in 1888; not yet revised

Cantab, n. & adj. was last modified in July 2023

Revision of the OED is a long-term project. Entries in oed.com which have not been revised may include:

  • corrections and revisions to definitions, pronunciation, etymology, headwords, variant spellings, quotations, and dates;
  • new senses, phrases, and quotations which have been added in subsequent print and online updates.

Revisions and additions of this kind were last incorporated into Cantab, n. & adj. in July 2023.

Earlier versions of Cantab, n. & adj. were published in:

OED First Edition (1888)

OED Second Edition (1989)


University of Cambridge

The Cambridge MA


In most UK universities, the Master of Arts is a degree awarded by examination. At Cambridge, the MA is conferred by right on holders of the BA degree of the University and on certain other senior members and is not available as a postgraduate qualification.

Possession of the MA, or any other Cambridge master's degree or doctorate, confers membership of the University Senate. This gives the right to:

  • participate in Discussions (part of the University's decision-making process)
  • vote in the election of a new Chancellor or High Steward
  • borrow books from the University Library

Many colleges also offer their senior members the opportunity to dine at High Table on a certain number of occasions each year.

This method of conferment of the MA also exists at the universities of Oxford and Dublin.

Getting your MA

A Bachelor of Arts may be admitted to the degree of Master of Arts not less than six years from the end of his or her first term of residence, provided that a supplicat for the MA may be entered only after the BA has been conferred.

This removes the previous requirement for the B.A. Degree to be held for two years before the M.A. can be conferred.

Someone who qualified for the BA but has never had it conferred cannot be entered for the MA, even if the necessary time has passed since the end of their first term of residence, unless and until the BA has been conferred on them.  You cannot have the BA and MA conferred during the same ceremony.

  • Further details are available from your college, which will usually inform you when you become eligible.

The MA may also be conferred, under Statute B.II.2, on Heads and Fellows of Colleges and on University Officers who are not Cambridge graduates after (except in the case of Heads and Professors) three years in post.


Degrees by Incorporation

Graduates of the Universities of Oxford or Dublin who hold a post in the University of Cambridge or in a Cambridge College have the privilege of being admitted to an equivalent Cambridge degree 'by incorporation' under Statute B.II.2. Eligibility is similar to that for the MA under Statute B.II.2.


Cambridge Cognition


Digital cognitive assessments

CANTAB® assessments provide scientifically validated, highly sensitive, precise and objective measures of cognitive function, correlated to neural networks.

Keep your participants engaged with our simple and user-friendly tasks, whether in-clinic or at home. Our neuroscientists and consultants work with you to build a battery of CANTAB cognitive assessments that will best enable you to reach your study objectives.

  • Language independent, facilitating consistent cross-cultural research
  • More targeted to measure specific cognitive domains than pen-and-paper assessments, our digital cognitive assessments produce more data points and reduce noise, making them highly sensitive
  • High sensitivity to pharmacological and environmental effects
  • Translational utility enables comparisons with preclinical findings
  • Reduce human error and variability with automated data capture and scoring
  • Automated test delivery removes variance for consistently high-quality data

Validated to measure specific cognitive domains

Our cognitive function-specific tests are calibrated to provide highly accurate data in-clinic on iPads or remotely using web-based testing.

  • Attention and psychomotor speed
  • Executive function
  • Emotion and social cognition

  • Test across larger geographical areas with web-based testing
  • Data quality maintained with task adherence checks
  • Save time and make testing more convenient for participants and study teams
  • Clear on-screen and voice-over instructions with an easy-to-use interface


A comprehensive solution

Cost and time efficient

Talk to us about how CANTAB assessments can support your study.


Cantab goes part-time


You are in mid-career and thought that the chance of doing postgraduate work at university, let alone Oxbridge, had long passed?

Think again. Cambridge University has established a new postgraduate degree designed for those in work. It is called the Master of Studies, and is Cambridge's first major step into part-time degrees.

Cambridge established a part-time Master of Education in 1992, but this caters for a limited group. The new degree, co-ordinated by the board of continuing studies at Madingley Hall, will be offered by several faculties. But an initial limit of 200 students has been set to ensure that the colleges do not become swamped by part-timers.

The first MSt (Cantab.) graduates are likely to come from an existing diploma in interdisciplinary design for the built environment, run jointly by the architecture and engineering departments. There are 16 students on the two-year course, and when the new Cambridge degree receives formal Privy Council approval next year, their diplomas will probably be upgraded to a postgraduate degree.

According to Michael Richardson, director of the board of continuing studies, there are already plans to establish courses in modern English literature and local regional history. He said the new degree was the product of "a widening perception of the need for part-time postgraduate study" and evidence that Cambridge "recognises that people need to update themselves in mid-career".

Henry Easterling, of the registry office, said most students would be from East Anglia. Entry qualifications will be a good first degree plus vocational experience.


Does Cantab mean Cambridge?


As a geologist, I often find myself fascinated by the connections between language and geography. One such connection that has caught my attention is the use of the term “Cantab” to refer to Cambridge. Cantabrigian, or simply Cantab, is a term with two distinct meanings. First, it is used to describe something that is of or related to Cambridge University in the city of Cambridge, England. Second, it can also refer to something that is of or related to the cities of Cambridge in both England and Massachusetts, United States.

The term “Cantab” is often used in the context of educational qualifications. For example, when someone has a degree from Cambridge University, they may include the abbreviation “Cantab” after their name and qualifications. This indicates that they have a degree from Cambridge, such as a Bachelor of Arts (B.A.) or any other degree offered by the university.

Interestingly, the term “Cantab” is not only associated with Cambridge in England but also with Cambridge, Massachusetts, home to renowned educational institutions such as Harvard University and the Massachusetts Institute of Technology (MIT). In colloquial usage, a “Cantab” can refer to a graduate of Cambridge University in England or a student or graduate of Harvard University in Cambridge, Massachusetts.

One might wonder if there is an equivalent term for Oxford, another prestigious university in the United Kingdom. In the Oxford University Gazette and University Calendar, the term “Oxf” is used instead of “Oxon” to refer to someone associated with Oxford University. This change in terminology was implemented in 2007 to align with the style used for other universities. Similarly, “Dub” is used for the University of Dublin (Trinity College Dublin).

In the context of Cambridge, the term “M.A. Cantab” refers to a degree conferred by right, without further examination, on those who already hold a B.A. degree from Cambridge and on certain other senior members. Possessing the M.A. or any other master’s degree or doctorate from Cambridge also grants membership of the University Senate.

The origin of the term “Cantab” can be traced back to the medieval Latin name for Cambridge, Cantabrigia. The name was derived from the Anglo-Saxon name Cantebrigge. In Cambridge, Massachusetts, the name “Cantabrigia” appears in the city seal and is abbreviated as “Cantab” in the seal of the Episcopal Divinity School located there.

It is worth noting that neither Oxford nor Cambridge offers the Master of Arts (M.A.) as a standalone taught degree. The M.A. is conferred without further examination on former undergraduate students. This differs from many other universities globally that offer the M.A. as a common postgraduate qualification.

When it comes to comparing the prestige of Oxford and Cambridge, opinions vary. Different university rankings provide different assessments. For example, The Times University Rankings for 2023 place Oxford at the top spot, while Cambridge ranks third. The QS World University Rankings 2023, on the other hand, place Cambridge in second place and Oxford in fourth. Ultimately, both universities have a long-standing reputation for academic excellence.

Now, let me address some frequently asked questions about Oxford and Cambridge:

1. Is it harder to get into Oxford or Cambridge? By the time you are sitting for an interview, your chances of success at Oxford are around 1 in 3, while at Cambridge, they are around 1 in 4. However, it’s important to note that interview questions cannot be predicted or prepared in advance.

2. Which is prettier, Oxford or Cambridge? Opinions differ on this matter. Generally, Cambridge is considered a little prettier, while Oxford has a bit more going on in terms of activities and attractions.

3. Why can’t you apply to both Oxford and Cambridge? Both universities receive a large volume of applications each year. Allowing applicants to apply to both institutions would create an even larger number of candidates to assess for a limited number of places.

4. Is Oxford or Harvard better? Harvard is ranked second in National Universities and first in Global Universities by U.S. News, while Oxford is ranked first in Best Global Universities in Europe and fifth by U.S. News. Rankings may vary depending on the source.

5. What is the most elite college at Oxford? All Souls College is considered the most exclusive Oxford College. It does not admit undergraduate students, and to gain admission, graduate and postgraduate students must pass a rigorous examination.

6. Do you automatically get a master’s at Oxford? Obtaining a master’s degree at Oxford is not automatic. If you have completed a Bachelor of Arts (BA) or Bachelor of Fine Arts (BFA) degree, you become eligible to apply for an MA after 21 terms (seven years) since you matriculated.

7. Why do Oxford degrees get upgraded to an MA? The Oxford MA is not an upgrade of the BA or a postgraduate degree. Instead, it is a historic tradition that marks seniority within the University.

8. Is postgraduate study at Cambridge difficult? Postgraduate study at the University of Cambridge is intense and intellectually demanding. The university sets high academic entry requirements to ensure the quality and rigor of its programs.

As a geologist, exploring the linguistic nuances and connections between colleges, universities, and cities like Cambridge and Oxford adds an extra layer of intrigue to my research. The use of terms like “Cantab” to refer to Cambridge reveals the historical and geographical significance of these esteemed educational institutions.

About The Author

Hubert Wolf

J Med Internet Res. 2020 Aug;22(8).

Comparing Web-Based and Lab-Based Cognitive Assessment Using the Cambridge Neuropsychological Test Automated Battery: A Within-Subjects Counterbalanced Study

Caroline Skirrow, Pasquale Dente, Jennifer H Barnett, Francesca K Cormack

Affiliations: Cambridge Cognition Ltd, Cambridge, United Kingdom; School of Psychological Science, University of Bristol, Bristol, United Kingdom; Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom

Associated data

Comparison of bivariate (Spearman) correlations of test performance between settings, with partial correlations covarying for elapsed time (in days) between assessments, and test-retest reliabilities of relevant CANTAB performance indices from previously published research.

Background

Computerized assessments are already used to derive accurate and reliable measures of cognitive function. Web-based cognitive assessment could improve the accessibility and flexibility of research and clinical assessment, widen participation, and promote research recruitment while simultaneously reducing costs. However, differences in context may influence task performance.

Objective

This study aims to determine the comparability of an unsupervised, web-based administration of the Cambridge Neuropsychological Test Automated Battery (CANTAB) against a typical in-person lab-based assessment, using a within-subjects counterbalanced design. The study aims to test (1) reliability, quantifying the relationship between measurements across settings using correlational approaches; (2) equivalence, the extent to which test results in different settings produce similar overall results; and (3) agreement, by quantifying acceptable limits to bias and differences between measurement environments.

Methods

A total of 51 healthy adults (32 women and 19 men; mean age 36.8, SD 15.6 years) completed 2 testing sessions, on average 1 week apart (SD 4.5 days). Assessments included equivalent tests of emotion recognition (emotion recognition task [ERT]), visual recognition (pattern recognition memory [PRM]), episodic memory (paired associate learning [PAL]), working memory and spatial planning (spatial working memory [SWM] and one touch stockings of Cambridge), and sustained attention (rapid visual information processing [RVP]). Participants were randomly allocated to one of the two groups, either assessed in-person in the laboratory first (n=33) or with unsupervised web-based assessments on their personal computing systems first (n=18). Performance indices (errors, correct trials, and response sensitivity) and median reaction times were extracted. Intraclass and bivariate correlations examined intersetting reliability, linear mixed models and Bayesian paired sample t tests tested for equivalence, and Bland-Altman plots examined agreement.

Results

Intraclass correlation (ICC) coefficients ranged from ρ=0.23-0.67, with high correlations in 3 performance indices (from PAL, SWM, and RVP tasks; ρ≥0.60). High ICC values were also seen for reaction time measures from 2 tasks (PRM and ERT tasks; ρ≥0.60). However, reaction times were slower during web-based assessments, which undermined both equivalence and agreement for reaction time measures. Performance indices did not differ between assessment settings and generally showed satisfactory agreement.

Conclusions

Our findings support the comparability of CANTAB performance indices (errors, correct trials, and response sensitivity) in unsupervised, web-based assessments with in-person and laboratory tests. Reaction times are not as easily translatable from in-person to web-based testing, likely due to variations in computer hardware. The results underline the importance of examining more than one index to ascertain comparability, as high correlations can present in the context of systematic differences, which are a product of differences between measurement environments. Further work is now needed to examine web-based assessments in clinical populations and in larger samples to improve sensitivity for detecting subtler differences between test settings.

Introduction

Cognitive function is typically assessed during one-to-one administration of a neuropsychological test in a clinic or lab setting by a trained psychometrician [ 1 ]. However, in-person assessments entail significant costs, requiring employed and trained staff, as well as time and travel costs for personnel and participants [ 2 ]. These costs may limit their application and reduce resources for clinical and research activities, including patient care, optimizing power for research, and screening for clinical trials [ 3 ]. The requirement for one-to-one test administration may also limit participation to people who are willing and able to travel, making some communities underrepresented in clinical research (eg, individuals who are geographically isolated, nondrivers, physically disabled, and those suffering from agoraphobia or social phobias).

Computerized testing platforms and widespread access to fast and affordable internet has the potential to bring neuropsychological assessment into people’s homes [ 2 - 4 ]. Web-based neuropsychological assessments could help to meet increasing demands in clinical and cohort studies [ 3 , 5 ]: providing access to large samples, allowing fine-grained phenotyping of complex clinical conditions, facilitating access to patients and participants in remote areas or those with mobility problems, enhancing coordination of data collection across multiple sites, assisting in monitoring of patients with chronic or progressive neurological diseases, and enabling cost-effective screening for clinical trials.

Web-based automated assessments are inexpensive, are quick to conduct, and provide fewer restrictions on timing and location [ 2 , 5 - 7 ]. Evidence suggests that broadly targeted web-based assessments allow the recruitment of samples that are reasonably representative in terms of personality and adjustment characteristics and are more diverse than traditionally recruited samples in terms of geographical location, gender, and socioeconomic status [ 7 ]. Moreover, web-based assessments can reduce the cost of recruiting specialized samples or special interest groups [ 4 , 7 ].

However, the joint position paper for the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology [ 8 ] highlights the necessity of viewing unsupervised computer-based tests as new and different from those that are examiner administered, with adaptations of existing tests requiring equivalency or new normative data. Key differences between examiner-led and unsupervised computerized testing relate to 3 primary factors, which are likely to interact with task-specific characteristics (such as simplicity of the user interface, audibility and clarity of stimuli and instructions, type of response required, and how engaging and how difficult a task is) to influence task performance:

  • Examiner contact: Social demands created by the presence of an examiner may affect performance [ 9 ]; examiner contact allows for behavioral observations to assess comprehension, mental state and competency, motivation, and task engagement [ 8 , 10 ]; the examiner can also provide additional explanation regarding tasks where needed [ 11 ], and structured encouragement to support participant motivation.
  • Testing environment: While the testing environment can be kept constant in the laboratory, it is uncontrolled elsewhere [ 8 , 10 ]. There is little control over the location, timing, and likelihood of participant distraction in unsupervised testing.
  • Workstation: Differences in the performance of computer hardware, software, processing speed, and internet speed, as well as response input method (touch screen versus key stroke or mouse click), are likely to impact test measures, particularly those relating to response timing [ 12 ].

Despite the key differences outlined earlier, web-based assessments have proven to be powerful for identifying age-related changes in cognitive processes [ 13 ], thus providing reliable data for a longitudinal and quantitative genetic analysis [ 2 , 14 ]. Previous reports have usually shown moderate correlations between web-based cognitive assessments and paper-and-pencil test variants [ 1 , 15 ], and moderate-to-high correlations between parallel computerized test versions assessing a broad range of cognitive domains administered in the lab and at home, or in supervised and nonsupervised settings [ 16 - 19 ]. This suggests that web-based cognitive assessment may be considered a viable alternative to in-person assessment.

Here, we examine the comparability of unsupervised web-based tests completed at home against in-person lab-based assessment in selected tests from the Cambridge Neuropsychological Test Automated Battery (CANTAB). CANTAB is a widely used computerized assessment battery [ 20 ], featured in over 2000 peer-reviewed papers [ 21 ] and applied across academic, clinical, and pharmacological research [ 22 ]. CANTAB includes a suite of 19 cognitive assessments measuring aspects of cognitive functioning in different therapeutic areas, including attention and psychomotor speed, executive function, memory, and emotion and social cognition. Tasks can be used individually or as a battery to measure different aspects of cognitive function. CANTAB is usually administered under controlled settings in the presence of a trained researcher or clinician.

This study aimed to determine the comparability of unsupervised web-based assessment on CANTAB against a standard in-person assessment in a healthy adult population. The aim was to examine the consistency of assessment outcomes across these 2 settings, and by extension to inform whether web-based testing could be used as an alternative or as a complementary assessment method producing similar results. We selected 7 tests from CANTAB, which correspond to those most frequently used in academic and clinical research in the cognitive domain of interest.

For web-based testing to show acceptable comparability, we required assessments to (1) show high levels of intersetting reliability, that is, the reproducibility of measures across settings [ 23 ], (2) show equivalence with in-person tests, and (3) meet established thresholds for agreement. Given the results from previous research comparing online and in-person tests reviewed earlier, we expected test performance indices to show acceptable comparability. However, we expected reaction time measures to perform more poorly due to the variance introduced by computing software, hardware, and response method.

Power Analysis

This study was powered to detect moderate-to-high intraclass correlations (ICCs) and moderate-to-large differences in test performance between test settings.

Power calculations to detect ICCs indicating adequate reliability were completed using the R package ICC.Sample.Size [ 24 , 25 ], a statistical package based on the work of Zou et al [ 26 ]. Using thresholds for clinical significance developed by Cicchetti [ 27 ], the following interpretations were adopted for ICC coefficients (ρ): <0.4, poor reliability; 0.40-0.59, fair; 0.60-0.74, good; 0.75-1.00, excellent. This indicated that a sample of 18 was required to detect an ICC that is indicative of good reliability (ρ=0.60) at 80% power, with a two-tailed α of .05. A sample of 45 would provide adequate power to detect an ICC that is indicative of fair reliability (ρ=0.40).
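To make the interpretation bands concrete, here is a minimal Python helper encoding the Cicchetti thresholds quoted above; it is purely illustrative and not part of the study's R-based power analysis.

```python
def interpret_icc(rho: float) -> str:
    """Map an ICC coefficient to Cicchetti's qualitative reliability bands."""
    if rho < 0.40:
        return "poor"
    if rho < 0.60:
        return "fair"
    if rho < 0.75:
        return "good"
    return "excellent"

assert interpret_icc(0.39) == "poor"
assert interpret_icc(0.60) == "good"   # the rho >= 0.60 cutoff used for 'good reliability' below
assert interpret_icc(0.75) == "excellent"
```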

The power to detect differences between testing platforms was examined using the program G*Power 3 [ 28 ]. This indicated that detecting an effect size of 0.4, at 80% power (two-tailed α at .05), would require a sample of 52 in a paired sample test with normal distribution, and between 35 and 47 for the nonparametric equivalent, depending on the underlying distribution of data (Laplace and logistic, respectively). An effect size of 0.4 has been reported as relatively typical within psychological sciences [ 29 , 30 ]. This study utilizes the Bayesian approach as an adjunct to our frequentist analysis to consider the strength of evidence in favor of both the alternative and null hypotheses and compare their probabilities [ 31 ].
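A rough cross-check of the paired-samples calculation is possible in Python with statsmodels (a substitution on my part; the authors used G*Power). A paired design reduces to a one-sample t test on difference scores, so TTestPower applies.

```python
import math

from statsmodels.stats.power import TTestPower

# Solve for the number of pairs needed to detect d = 0.4 at 80% power,
# two-tailed alpha = .05, in a paired (one-sample on differences) t test.
n_pairs = TTestPower().solve_power(effect_size=0.4, power=0.80, alpha=0.05,
                                   alternative="two-sided")
print(math.ceil(n_pairs))  # expect ~52, in line with the figure reported above
```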

Participants

Participants were approached via fliers and advertisements posted on Facebook, targeting Cambridge, United Kingdom, and the immediate surrounding areas. These directed potential participants to a web-based screening questionnaire, administered via SurveyMonkey [ 32 ], through which participants provided basic demographic data (sex, age, and education level) and responses to questions probing eligibility for the study (exclusion criteria: history of dyslexia, concussion, head injury, neurological or psychiatric conditions, and nonfluent in English).

A total of 51 healthy adults were recruited into this study (32 women and 19 men), aged between 20 and 77 years, with a mean age of 36.8 (SD 15.6) years. Participants were highly educated, with 17.6% with school-level qualifications and 82.4% with university-level education, reflecting the demography of this region. All participants provided informed written consent to participate.

Participants were allocated to one of the two groups (in-person first or web-based first) through randomization at the time of recruitment. However, where necessary, allocation from randomization was overridden where participant availability or laboratory space constrained the timing of assessments. The allocation of test sessions was as follows: in-person testing first for 33 participants and web-based assessment first for 18 participants. Test sessions were completed on average 1 week apart (mean 7.24, SD 4.5 days, range 1-25 days, with the majority [82% of tests] between 3 and 9 days), again with variation due to participant and laboratory availability.

In-person assessments were completed at Cambridge University, Cambridge, United Kingdom. Participants were seated in a quiet room and presented with CANTAB loaded onto an iPad (iPad 9.7, iOS operating system [ 33 ]). The CANTAB test administration is fully automated, with on-screen text instructions and additional voiceover guidance for each task, explaining task goals and response requirements. For tests requiring training in addition to instruction (see Measures), training trials are incorporated within the automatic test administration. The transition from training to tests proceeds automatically, as do transitions between tests. Responses were logged via the touch screen. A trained psychometrician was present, whose role was to provide technical support or additional instructions where needed, as well as to log observations (distraction or problems) during task performance.

Web-based assessments were completed via the CANTAB Connect web-based testing feature [ 34 ]. This delivered assessments which, from the viewpoint of the participant, were identical to those administered in-person, with the exception that they were administered at home and on personal computing systems. Web-based testing was enabled only on desktop or laptop computers, and not on touch screen devices. Responses were logged using mouse or trackpad clicks. Identical to in-person assessments, test administration was automated, with on-screen text instructions and additional voiceover guidance for each task, training incorporated into tasks where required, and automatic transitions between tests. Web-based CANTAB tests are designed to be resistant to low bandwidth by preloading or caching of data, allowing tests to be run in offline mode in testing locations where internet connectivity is poor. The application code is designed for cross-browser support and uses ubiquitous HTML and JavaScript features to support commonly used platforms. Extensive automated and manual tests are carried out to test functionality across browsers and ensure that the tests operate correctly and record accurate data.

Distraction during web-based assessment was documented with inbuilt programming to log if tasks were completed in full-screen mode, or if the participant tabbed to another browser window during the task. Participants were also asked at the end of the testing session if they were distracted during testing, although the nature of the distraction was not queried. These different forms of distraction were logged, but not differentiated, in the study database during data collection.

A total of 7 CANTAB tests ( Figure 1 ) were administered. Cognitive outcome measures include performance indices (eg, number of trials solved, number of errors, response sensitivity) and reaction time (response times). For both in-person and web-based assessments, tests were administered in the following order:

Figure 1. Screenshots of Cambridge Neuropsychological Test Automated Battery tests administered: (A) Paired Associate Learning, (B) One Touch Stockings of Cambridge, (C) Pattern Recognition Memory, (D) Spatial Working Memory, (E) Emotion Recognition Task, and (F) Rapid Visual Information Processing.

  • Paired associate learning (PAL) [ 22 ] is an 8-min test of visual episodic memory. The screen displays a number of boxes and shows the interior of each box in randomized order to briefly reveal patterns in some boxes. Patterns are then displayed in the middle of the device screen one at a time, and the participant must identify the box in which each pattern was originally located. If an error is made, boxes are opened in sequence again to remind participants of the pattern locations. The test begins with a practice trial, which includes 6 boxes in which there are 2 patterns. Once the practice trial is successfully completed, the test begins. The task increases in difficulty after each successfully completed stage, with trials including 2, 4, and 6 different patterns in 6 boxes, and finally 8 different patterns in 8 boxes. The task discontinues when a participant fails to locate all patterns after 4 attempts on the same trial. Key outcome measures included PAL Total Errors Adjusted, the total number of errors adjusted for the stages not completed due to early discontinuation, and PAL First Attempt Memory Score, the number of times a participant chooses the correct box on their first attempt across each stage.
  • One touch stockings of Cambridge (OTS) [ 35 ] is a 10-min test of executive function, assessing spatial planning and working memory, and based on the Tower of London test. The screen shows 2 displays, each containing 3 colored balls that look like stacks held in stockings or socks suspended from a beam. The target configuration is shown at the top of the screen and the starting arrangement below. The subject must determine the number of moves required to match the starting configuration to the target. One move consists of taking 1 ball from its current location and placing it in a stocking that has free space. Only the top ball in any stocking may be moved (the balls below are inaccessible until any balls above have been moved), and a ball placed in a stocking drops to the lowest free space available. Participants must solve each problem without moving the balls, by indicating the number of moves required by selecting a numbered box at the bottom of the screen. The task begins with 3 training trials. The first two show how the balls would be moved before participants select their response, and the third only shows the solution when the participants’ response is incorrect. Once training is completed, the task then progresses with increasing difficulty. Key outcomes included problems solved on first choice and median latency to correct response.
  • Pattern recognition memory immediate (PRM-I) [ 36 ] is a 3-min test of immediate visual pattern recognition. A series of 18 simple but abstract stimulus patterns are shown in the center of the screen for 3000 ms each. The screen then displays pairs of patterns, one novel pattern and one that was shown previously. The participants have to select patterns that they recognize from the presentation phase. Participants receive performance feedback in the form of a tick or cross after every response. Key outcome variables include the percentage of correct responses and median latency of correct responses.
  • Spatial working memory (SWM) [ 35 ] is a 4-min test of retention and manipulation of visuospatial information. Participants click on colored boxes presented on the screen to inspect their contents and reveal a token hidden below. They then move these tokens to a collection area on the right-hand side of the screen. The key task instruction is that tokens will not be located in the same box twice during each trial. Outcome measures include SWM Between Errors: the number of times the participant incorrectly revisits a box, calculated across all assessed 4, 6, and 8 token trials; and SWM Strategy: the number of unique boxes from which a participant starts a new search in the 6 and 8 box trials. More efficient searches are carried out by searching boxes in a fixed order [ 37 ]. The task discontinues after 20 failed inspections during 4-token trials, 30 failed inspections for 6-token trials, and 40 failed inspections for 8-token trials.
  • The emotion recognition task (ERT) [ 38 ] is a 7-min test measuring participants’ ability to identify 6 basic facial emotion expressions along a continuum of expression magnitude. Participants fixate on a white “+” cross in the center of the screen for 1500 to 2500 ms, after which a face stimulus is displayed for 200 ms followed by a stimulus mask image for 250 ms. Participants then choose the most appropriate emotion from a list of 6 options (sadness, happiness, fear, anger, disgust, or surprise). Outcome measures included the total number of hits and median latency to correct responses.
  • Pattern recognition memory delayed (PRM-D) is a 2-min test of delayed visual pattern recognition. Patterns displayed for PRM-I are revisited and recognition is probed in the same manner as described in (3) after delay. In this study, the delay between PRM-I and PRM-D was approximately 12 min. Key outcome variables include the percentage of correct responses and median latency of correct responses.
  • Rapid visual information processing [ 39 ] (RVP) is a test of sustained attention lasting 7 min. Digits from 2 to 9 are presented successively at the rate of 100 digits per minute and in a pseudorandom order. Participants are asked to respond to target sequences of digits (eg, 3-5-7, 2-4-6, 4-6-8) as quickly as possible by clicking or pressing a button at the center of the device screen. The level of difficulty varies with either 1- or 3-target sequences that the participant must watch for at the same time. Outcome measures included a signal detection measure of response sensitivity to the target, regardless of response tendency (RVP A’: expected range is 0-1) and the median response latency.

CANTAB test structures are identical for each administration, across both in-person and web-based assessments. However, for most CANTAB tests (OTS, PAL, RVP, PRM, and ERT), stimuli are allocated at random from a broader stimulus pool during each assessment, making it unlikely that participants complete the same problems more than once. For the SWM test, token locations are not fixed but instead programmed to respond to participants’ performance and selection strategy, reducing the risk of participants being able to learn the location of tokens from one assessment to the next. These adaptive features aim to reduce practice effects on repeat testing and also mean that there are no set variants of the tests that can be compared in a group-wise fashion.

Statistical Analysis

Frequentist analyses including mixed models, regressions, correlational analysis, and ICCs were completed in SAS version 9.4. Statistical significance thresholds were set at P ≤.05 (two tailed). The Bayesian statistical analysis was carried out using JASP [ 40 ].

Outliers were identified using the methods recommended by Aguinis et al [ 41 ], first through visual plotting and then confirmed numerically, using a cutoff of 2.24 SD units above or below the mean. One data point was excluded from each of the following assessments: RVP A’, RVP Median Latency to Correct Response, PRM Percentage Correct Immediate, and PRM Median Latency Immediate and Delayed (ranging 4.5-6.9 SD units from the mean, all acquired during the web-based assessment).
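The outlier rule is easy to express directly; the following sketch (toy data, NumPy assumed) flags values more than 2.24 SD units from the mean, mirroring the cutoff described above.

```python
import numpy as np

def flag_outliers(scores: np.ndarray, cutoff: float = 2.24) -> np.ndarray:
    """Return a boolean mask marking values more than `cutoff` SDs from the mean."""
    z = (scores - scores.mean()) / scores.std(ddof=1)
    return np.abs(z) > cutoff

# Toy latency data (ms): one grossly slow response stands out.
latencies = np.array([412, 388, 405, 399, 420, 395, 401, 408, 390, 1260], dtype=float)
print(flag_outliers(latencies))  # only the 1260 ms value is flagged
```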

To allow the comparison with test-retest reliabilities commonly reported in the literature [ 3 , 5 , 18 , 42 ], bivariate coefficients were computed to measure the strength of the linear association of outcome measures across test settings. Spearman rank correlations are reported because of the nonnormal distribution of data. To control for variation in the duration between assessments, partial correlations were completed, which examined correlations of test results between settings after covarying for the duration between tests.
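As a sketch of this step outside SAS (a substitution; the paper used SAS 9.4), SciPy gives the Spearman coefficient and the pingouin package the partial correlation controlling for the gap between sessions. Column names and values are illustrative only.

```python
import pandas as pd
import pingouin as pg
from scipy.stats import spearmanr

df = pd.DataFrame({
    "lab": [12, 8, 15, 6, 10, 9, 14, 7],    # toy in-person scores
    "web": [14, 9, 13, 7, 11, 10, 16, 8],   # toy web-based scores
    "days_between": [3, 9, 5, 7, 4, 8, 6, 10],
})

rho, p = spearmanr(df["lab"], df["web"])
print(f"Spearman rho={rho:.2f}, p={p:.3f}")

# Partial Spearman correlation, covarying for days between assessments.
partial = pg.partial_corr(data=df, x="lab", y="web",
                          covar="days_between", method="spearman")
print(partial[["r", "p-val"]])
```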

However, although the correlational analysis reflects the degree to which paired observations follow a straight line, it does not inform regarding the slope of the line or whether the sets of observations capture the same metric or range of scores [ 43 ]. ICCs were selected as the primary reliability measure, because ICCs assume that the variables investigated share both their metric and variance and incorporate both random and systematic errors when calculating consistency between assessments [ 44 , 45 ]. ICCs therefore account for both consistency in performance (the degree of correlation) between test settings as well as capturing any systematic changes in the mean (the degree of agreement) [ 46 ]. Following guidance by Koo and Li [ 46 ] and justifications outlined in detail in Hansen et al [ 5 ], ICC was calculated based on a single-rating, absolute agreement, two-way random effects model (ICC(2,1) [ 47 ]). ICC coefficients were computed using the %INTRACC macro for SAS [ 48 ]. In line with previous studies and interpretative recommendations for ICC, we used ρ≥0.60 to indicate good reliability [ 18 , 27 ].
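The same ICC(2,1) model can be fitted outside SAS; the sketch below uses pingouin's intraclass_corr (my substitution for the %INTRACC macro), which labels the single-rating, absolute-agreement, two-way random effects coefficient as "ICC2". Data layout and values are illustrative.

```python
import pandas as pd
import pingouin as pg

# Long format: one row per participant x setting.
long = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "setting": ["lab", "web"] * 5,
    "score":   [12, 14, 8, 9, 15, 13, 6, 7, 10, 11],
})

icc = pg.intraclass_corr(data=long, targets="subject",
                         raters="setting", ratings="score")
print(icc.loc[icc["Type"] == "ICC2", ["ICC", "CI95%"]])
```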

Mixed effects models simultaneously investigated differences between the test settings (in-person vs web-based) and time (first vs second assessment). Mixed effects models can evaluate multiple factors that affect the structure of the data and allow longitudinal effects (practice and learning effects) to be straightforwardly incorporated into the statistical model [ 49 ]. Outcome measures were entered individually into each model as dependent variables, and 2 mixed effects models were analyzed for each outcome measure. The first model examined only the fixed effects of test setting and time of assessment, with participants entered into the model as a random effect. A second model was used to examine the presence of covariates that may affect test performance across settings, and included additional fixed effects of age, an age-by-setting interaction, and distraction during web-based testing (dummy coded as 1=distracted, 0=not distracted). This second model tested whether age affected performance and interacted with assessment setting to affect test results, and whether distraction during web-based assessment contributed to differences in test results.

The normality of the distribution of residuals was examined, and where required data were transformed before data analysis. Transformations included log transformations for PAL Total Errors Adjusted, SWM Between Errors, OTS Problems Solved on First Choice, and OTS Median Latency to Correct response and square root transformation for PAL First Attempt Memory Score. For most variables, transformations were successful and a linear mixed model was carried out (SAS command PROC MIXED). For PRM-I and PRM-D percentage correct, transformations were not successful. These data were reverse transformed (calculated as the percentage correct subtracted from 100) and were analyzed with mixed models with gamma error distributions and log links (SAS command PROC GLIMMIX).
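As an illustrative translation of the first model (a sketch, not the authors' SAS PROC MIXED code), statsmodels can fit a linear mixed model with fixed effects of setting and session and a random intercept per participant; the toy data below are counterbalanced and log-transformed, echoing the handling of skewed error counts described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
subjects = np.repeat(np.arange(1, 11), 2)          # 10 participants, 2 sessions each
session = np.tile(["first", "second"], 10)
# Counterbalance: odd-numbered subjects test in the lab first, even-numbered online first.
setting = np.where((subjects % 2 == 1) == (session == "first"), "lab", "web")

data = pd.DataFrame({"subject": subjects, "session": session, "setting": setting})
data["errors"] = rng.poisson(10, size=len(data)) + 1
data["log_errors"] = np.log(data["errors"])        # transform skewed counts

model = smf.mixedlm("log_errors ~ setting + session", data, groups=data["subject"])
print(model.fit().summary())
```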

Evidence in favor of the null hypothesis was examined using a Bayesian approach [ 50 ]. The advantage of using the Bayes factor over classical significance testing is that it provides a comparison of how likely the null hypothesis is compared with the alternative hypothesis [ 31 ]. Bayesian paired samples t tests were conducted, and Bayes factor test statistics were extracted, alongside effect sizes (δ) and their 95% credible intervals, contrasting the likelihood of data fitting under the null hypothesis (H0: no difference between test settings) with the alternate hypothesis (H1: that there is a difference between test settings). A default Cauchy prior width of r=0.707 was selected, and a Bayes factor robustness check was completed to examine if the qualitative conclusions changed with reasonable variations to the prior width. Bayes factors (BF10) were interpreted using a classification scheme adopted from Wagenmakers et al [ 51 ]: Bayes factors below 1 were taken as evidence for the null hypothesis (0.33-1: anecdotal evidence; 0.1-0.33: moderate evidence; <0.1: strong evidence for H0), and Bayes factors above 1 as evidence for H1.
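A hedged Python equivalent of this step (the study used JASP) is pingouin's paired t test, which reports BF10 under the same default Cauchy prior width of r=0.707. Arrays below are toy data, not study results.

```python
import numpy as np
import pingouin as pg

lab = np.array([12.0, 8.0, 15.0, 6.0, 10.0, 9.0, 14.0, 7.0])
web = np.array([14.0, 9.0, 13.0, 7.0, 11.0, 10.0, 16.0, 8.0])

# Paired t test with Bayes factor (Cauchy prior, r = 0.707).
res = pg.ttest(lab, web, paired=True, r=0.707)
print(res[["T", "p-val", "BF10"]])  # BF10 < 1 favours H0: no setting difference
```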

Agreement between test settings was examined with Bland-Altman plots [ 52 ]. These plot the difference between assessments (eg, A − B) versus the average across paired measures ((A + B)/2), along with 95% limits of agreement [ 53 ]. The plots serve as a visual check that the magnitude of the differences is comparable throughout the range of measurement. Distributions of difference scores were assessed using Kruskal-Wallis tests, and where these were nonnormally distributed, raw data were log transformed before plotting and analysis. Other transformations were not considered, as these are not advised for this method of analysis [ 52 , 54 ]. Agreement is considered adequate when 95% of data points lie within limits of agreement [ 52 ]. Proportional bias was examined by regressing difference scores against mean scores to identify the tendency for the difference to increase or decrease with higher score magnitudes [ 55 ].
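The Bland-Altman quantities are simple to compute directly; this sketch (toy arrays, SciPy for the proportional-bias regression) derives the bias, the 95% limits of agreement, the share of points within them, and the regression of differences on means described above.

```python
import numpy as np
from scipy import stats

lab = np.array([12.0, 8.0, 15.0, 6.0, 10.0, 9.0, 14.0, 7.0])
web = np.array([14.0, 9.0, 13.0, 7.0, 11.0, 10.0, 16.0, 8.0])

diff = lab - web                 # difference between settings (A - B)
mean_pair = (lab + web) / 2      # average of paired measures ((A + B) / 2)

bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
lower, upper = bias - half_width, bias + half_width

within = np.mean((diff >= lower) & (diff <= upper))
slope, intercept, r, p, se = stats.linregress(mean_pair, diff)  # proportional bias

print(f"bias={bias:.2f}, limits of agreement=[{lower:.2f}, {upper:.2f}]")
print(f"{within:.0%} of points within limits; proportional-bias p={p:.3f}")
```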

Test Completion

Full test data were obtained from all participants with the exception of 2 individuals for whom the SWM test terminated early due to a large number of errors made during web-based assessment. During in-person assessments, support from the examiner was required on 4 occasions (3 times for volume adjustment during PAL testing and once for additional instruction on the PRM immediate recognition task). Distraction, either through self-report or due to participants tabbing away from the assessment window during web-based assessments, was noted for 16 participants for PAL, ERT, OTS, and PRM-I tests and for 17 participants during SWM, RVP, and PRM-D tests.

Reliability

Bivariate correlation coefficients and ICCs are shown in Table 1 . Spearman correlation coefficients across testing settings ranged from 0.39 to 0.73 ( P <.01). ICCs ranged from 0.23 to 0.67 ( P ≤.05). A total of 5 tests had ICC coefficients meeting the cutoff at ≥0.60, with PAL Total Errors Adjusted just meeting requirements (exact ICC coefficient=0.595, rounded up), and above threshold coefficients for RVP A’, SWM Between Errors, PRM-I Median Latency, and ERT Median Correct Reaction Time. Partial correlations of test results across settings after controlling for the duration between tests produced very similar results. These are shown in Multimedia Appendix 1 .

Table 1. Reliability analysis for outcome measures: Spearman correlation coefficients and intraclass correlations between test results obtained in-person and in web-based assessments.

a PAL: paired associate learning.

b OTS: one touch stockings of Cambridge.

c PRM-I: pattern recognition memory immediate.

d SWM: spatial working memory.

e ERT: emotion recognition task.

f PRM-D: pattern recognition memory delayed.

g RVP: rapid visual information processing.

Equivalence

Descriptive statistics and results from the mixed model assessing fixed effects of test setting and time are presented alongside the Bayesian analysis results in Table 2 . Mixed models revealed no significant differences between in-person and web-based assessments for performance indices ( P =.10 to .54). However, 3 of the 5 reaction time measures showed differences across test settings (response latencies for PRM-I, PRM-D, and ERT tasks), with web-based assessments yielding slower median response times ( P <.001 to .03). Practice effects were seen for RVP and SWM performance indices, showing improvement on second administration ( P <.01). Response latencies were faster on the second administration for OTS responses ( P =.001).

Table 2. Descriptive data for outcome variables and statistical results for equivalence analyses by time of assessment (first vs second) and test setting (in-person vs web-based): mixed effects model and Bayesian t test statistics.

Additional fixed effects of age, an age-by-setting interaction, and distraction were incorporated into the mixed models. Age effects, reflecting a decline in performance with increasing age, were found for all outcome measures except RVP A’, the percentage of correct responses on PRM-I and PRM-D, and OTS Problems Solved on First Choice. No significant age-by-setting interactions were observed, indicating that test performance did not differ between in-person and web-based testing as a function of age, although there was a trend toward slower reaction times on web-based testing for older participants on the PRM-I task (PRM-I Median Latency: F1,45=4.01, P=.051; for all other tests, F statistic range 0.02-2.49; P=.12 to .90). Effects of distraction were nonsignificant for most tests but reached or neared significance thresholds for certain reaction time measures (ERT Median Correct Reaction Time: F1,47=6.03, P=.02; RVP Median Reaction Time: F1,46=3.78, P=.06).

Bayesian analyses supported the null hypothesis (H0: no difference between test settings) over the alternate hypothesis (BF10=0.161-0.54) for all performance indices. Applying the classification scheme adopted from Wagenmakers et al [51], support for the null hypothesis was anecdotal for 3 variables (PAL First Attempt Memory Score, SWM Strategy, and ERT Total Hits) and moderate for the 6 other performance indices. Reasonable variations in the prior width produced no change in the qualitative conclusions. Effect sizes were small (0.15-0.27).

The alternate hypothesis, reflecting a difference between test settings, was supported for 3 of the 5 reaction time measures (response latencies on the PRM-I, PRM-D, and ERT tasks), with support ranging from anecdotal for the PRM measures (BF10=1.60-2.15) to very strong for ERT (BF10=512,557.32). Effect sizes were in the small-to-large range (0.04-1.69). Moderate support for the null hypothesis was seen for the RVP and OTS reaction time measures.
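For illustration, a Bayesian paired t test of this kind can be run with pingouin (synthetic data; the study's own software is not restated here). The r argument sets the Cauchy prior width (default 0.707), which can be varied as the sensitivity check described above, and the helper maps BF10 to the Wagenmakers et al evidence categories:

    import numpy as np
    import pingouin as pg

    rng = np.random.default_rng(2)
    in_person = rng.normal(900, 150, 52)        # synthetic median RTs (ms)
    web = in_person + rng.normal(80, 100, 52)   # systematically slower web RTs

    res = pg.ttest(web, in_person, paired=True, r=0.707)  # r: Cauchy prior width
    bf10 = float(res["BF10"].iloc[0])

    def classify(bf):
        """Evidence for H1 under the Wagenmakers et al classification scheme."""
        if bf < 1:   return "evidence favors H0"
        if bf < 3:   return "anecdotal"
        if bf < 10:  return "moderate"
        if bf < 30:  return "strong"
        if bf < 100: return "very strong"
        return "extreme"

    print(res[["T", "p-val", "cohen-d", "BF10"]])
    print(f"BF10={bf10:.2f}: {classify(bf10)} evidence for a setting difference")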

Bland-Altman plots showed overall good agreement between test settings for performance indices (see Figure 2 for an example: PAL Total Errors Adjusted). Only 2 performance indices fell short of the requirement that 95% of data points lie within the limits of agreement (PAL First Attempt Memory Score and SWM Strategy, with 94% and 92% of data points within the limits of agreement, respectively). The PAL First Attempt Memory Score showed a proportional bias (F1,50=7.43; P=.009; R2=0.13), with lower mean scores associated with greater differences between measurements (Figure 3). For all other performance measure plots, no bias relating to test setting was seen, and difference magnitudes were comparable throughout the range of measurements. Performance data from the PRM tasks and from SWM Between Errors could not be accurately visualized using Bland-Altman plots because of significant nonnormality of the difference scores that could not be corrected through logarithmic transformation.

Figure 2. Comparability of Paired Associate Learning Total Errors Adjusted across test settings. Density plots for (A) web-based assessment and (B) in-person assessment show similar distributions; (C) scatterplot with reference line shows a linear relationship between assessment settings (ρ=0.54); (D) Bland-Altman plot: the mean difference (solid black line) is close to zero, showing no bias; dashed lines delimit the limits of agreement. Comparable magnitudes of difference are seen throughout the range of measurements, and 96% of the data lie within the limits of agreement.

Figure 3. Comparison of Paired Associate Learning First Attempt Memory Score across test settings. Density plots for (A) web-based assessment and (B) in-person assessment show similar distributions; (C) scatterplot with reference line shows a linear relationship between assessment settings (ρ=0.45); (D) Bland-Altman plot: the mean difference (solid black line) is close to zero, showing no overall bias; dashed lines delimit the limits of agreement. Proportional bias is seen, with greater differences at lower mean measurements; 94% of the data lie within the limits of agreement.

For reaction time measures, Bland-Altman plots reflected bias between test settings in PRM-I and PRM-D response latencies and ERT Median Correct Reaction Time (eg, Figure 4), confirming the findings from the mixed model and Bayesian analyses. Additionally, for all reaction times, 94% of the data points lay within the limits of agreement, falling short of the 95% cutoff. Visual inspection of the plots confirmed comparable magnitudes of difference throughout the range of measurements, and regression analyses revealed no proportional bias (R2 range 0-0.05; P=.12 to .67).

Figure 4. Comparability of Emotion Recognition Task median correct reaction time (in ms) across test settings. Density plots for (A) web-based assessment and (B) in-person assessment show a broader distribution of timings (range 500-3000 ms) and slower overall timings for web-based assessment compared with in-person assessment (range 500-2500 ms); (C) scatterplot with reference line shows a strong linear relationship between assessment settings (ρ=0.73); (D) Bland-Altman plot: the mean difference (solid black line) is shifted above zero, demonstrating bias; dashed lines show the limits of agreement. Comparable magnitudes of difference are seen throughout the range of measurements, and 94% of the data lie within the limits of agreement.

Discussion

This study examined the comparability of the widely used CANTAB administered unsupervised via the internet against a typical in-person lab-based assessment, using a counterbalanced within-subjects design. We imposed strict criteria for comparability, comprising satisfactory intersetting reliability, equivalence, and agreement across test settings. Overall, our results support the comparability of performance indices (errors, trials completed, and response sensitivity) acquired during web-based assessments. Reaction time measures showed poorer comparability, with results revealing significant differences and poor agreement between test settings.

Bivariate correlation coefficients between the 2 modes of test administration ranged between 0.39 and 0.73, broadly in keeping with previous research comparing in-person and web-based assessment of other cognitive tasks [16,18,19]. The correlations reported here are similar to previously reported test-retest correlations for the CANTAB tests. An overview of test-retest correlations for CANTAB performance indices from previously published papers (and in different test populations) is given in Multimedia Appendix 1.

ICCs were higher for some tests than for others, with fair reliabilities (ICC=0.40-0.49) seen for planning and executive function tasks (SWM Strategy and the OTS performance measures). Previous research has shown that cognitive measures are subject to significant intraindividual variation [56]. A meta-analysis showed that test-retest reliabilities can differ depending on the tests completed and the cognitive functions they tap, with lower reliability typically seen for tests assessing executive functions and memory [57]. Poor reliability was seen for the PRM-I percentage of correct trials in this study, which could be attributable to the low variance and ceiling-level performance on this task in this healthy volunteer sample.

ICCs and Spearman correlations generally gave similar results but diverged more for reaction times, where the range and average differed between assessment settings. In these cases, the ICC typically gave a tempered coefficient relative to the Spearman correlation, reflecting that the ICC takes systematic error between assessments into account.

Learning effects are likely to have had an impact on concordance between test settings [16]. Practice effects, with improvement on the second test administration, were seen for 4 outcome measures (RVP A’, SWM Strategy, SWM Between Errors, and OTS Median Latency to Correct response). Previous work has shown that specific tests, in particular those assessing visual memory, are more susceptible to practice effects [58]. The novelty of a test, particularly in the executive function domain, is also thought to influence susceptibility to practice effects [59]. Owing to these effects, it is recommended that a familiarization session, reducing the immediate effect of the novelty of tests and testing procedures, be used before baselining cognitive performance in clinical trials and other within-subject designs. Practice effects were not seen for the remaining outcome measures, which may be due to the use of alternate test stimuli [57]. In most CANTAB tests, stimuli are allocated at random from a broader stimulus pool during each assessment, reducing the likelihood that participants complete the same problems more than once.

Two out of 9 performance indices met all predefined criteria for comparability between measures. PAL Total Errors Adjusted and RVP A’ test scores did not differ between test settings, showed good intersetting reliability, and showed acceptable agreement on Bland-Altman plots. Additionally, for SWM Between Errors, Bland-Altman analyses were not completed, but intersetting reliability was good and there was no evidence of performance differences between settings. These measures are therefore determined to have good overall comparability with typical in-person assessment (overview shown in Table 3).

Table 3. Overall assessment of web-based outcome measures against the 3 comparability criteria.

a Reliability: criterion met where the intraclass correlation coefficient is ≥0.60.

b Equivalence: criteria met where there is no significant difference between performance levels across test settings in mixed effects models, and the data support the null hypothesis in Bayesian paired t tests.

c Agreement: criteria met where ≥95% of data points lie within the limits of agreement on Bland-Altman plots and there is no evidence of bias or proportional bias.

d PAL: paired associate learning.

e ✓: criteria met.

f x: criteria not met.

g OTS: one touch stockings of Cambridge.

h PRM-I: pattern recognition memory immediate.

i —: analyses not completed.

j SWM: spatial working memory.

k ERT: emotion recognition task.

l PRM-D: pattern recognition memory delayed.

m RVP: rapid visual information processing.

Two additional performance indices were determined to have moderate comparability with respect to in-person assessment. The ERT Total Hits and OTS Problems Solved on First Choice outcome measures showed good equivalence and agreement but below-threshold reliability indices. For ERT Total Hits, the ICC fell just short of the imposed threshold (ICC coefficient=0.57).

Overall, none of the 5 web-based reaction time measures met more than one of the predefined comparability criteria, indicating that response latency measures are less easily translated from the lab to the home. Acceptable correlations between in-person and web-based assessments were undermined by a lack of equivalence and agreement between the measures. Correlation coefficients examine the linear relationship and relative consistency between 2 variables (the consistency of the position or rank of individuals in one assessment relative to the other [45]) rather than the absolute agreement between measurements within individuals [52,55], and are therefore insensitive to differences in metrics or variance (Figure 4).
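A toy numeric example of this insensitivity (synthetic data, not from the study): adding a constant device-related slowdown leaves the rank-based correlation at exactly 1.0 while producing a clear absolute bias that only agreement statistics would flag.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(3)
    lab = rng.normal(800, 120, 52)   # synthetic in-person reaction times (ms)
    web = lab + 150                  # same ranking, constant 150 ms slowdown

    rho, _ = spearmanr(lab, web)
    print(f"Spearman rho = {rho:.2f}")                       # 1.00: ranks unchanged
    print(f"mean difference = {np.mean(web - lab):.0f} ms")  # 150 ms systematic bias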

Differences between settings could be due to a variety of factors. First, web-based assessments were completed on the laptop and desktop computers that participants had readily available at home or elsewhere. Differences in computing equipment across settings are likely to have had an impact on response times [12]. Second, additional variance may have been introduced by distractions in the home environment, in comparison with the formal lab-based testing environment. We attempted to monitor and control for distraction and found that distraction more strongly affected reaction time measures during web-based testing. Furthermore, all 5 outliers excluded from the current analyses were obtained during web-based assessments, and the missing data from 2 participants were due to additional errors during web-based testing on the SWM task, which precluded accurate calculation of test performance scores. Susceptibility to distraction and resultant increases in the variance of test outcome measures are important to bear in mind when considering web-based testing as a substitute for, or in addition to, in-person testing.

Limitations

The use of a healthy, relatively young, and highly educated sample may limit the generalizability of findings to less educated, clinical, or older samples. This research suggests that, for the examined CANTAB performance indices, web-based assessments are likely to be a suitable alternative in similar samples. Further examination of the comparability of web-based assessment is now required in populations of clinical interest. In the longer term, participants and patient groups with access restrictions may be the ones who benefit most from remote testing.

The study could not separate the reliability of tests across settings from reliability across devices, since all in-person tests were completed on touch screen iPads and all web-based assessments on personal computers or laptops. Further research is required to examine whether reaction time data may be collected more consistently where similar or identical devices are used across settings. Since the completion of this study, workstation information is now routinely collected for CANTAB web-based tests, which allows better determination of the effects of different workstations on test performance.

It is not clear how computer or device experience may have interacted with our results, because we did not collect this information. However, our participants were recruited via Facebook, screened for inclusion online, and tested at home on their personal computing systems, so it is likely that they had at least modest computer experience. Discrepancies between lab-based and remote web-based testing may be amplified for individuals with less computer experience, who may need to rely on the support of study staff to a greater extent.

The study was powered to detect moderate differences between test settings and was not adequately powered to identify subtle differences. Bayesian statistics were able to quantify the level of support for the null or alternate hypothesis, but much larger samples would be required to establish stronger evidence for the null hypothesis. Replication in a larger sample is now required to examine for the presence of any subtle differences between test settings.
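To make the sample size point concrete, a quick power sketch for a paired t test (illustrative only; the study's own power analysis and assumed effect sizes are not reproduced here) shows how quickly the required sample grows as the target effect shrinks:

    import math
    from statsmodels.stats.power import TTestPower

    power = TTestPower()  # power for one-sample/paired t tests
    for d in (0.5, 0.2):  # moderate vs small standardized effect size
        n = power.solve_power(effect_size=d, alpha=0.05, power=0.80)
        print(f"d={d}: ~{math.ceil(n)} pairs needed")
    # d=0.5 requires roughly 34 pairs; d=0.2 roughly 199.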

Further work is now required to examine test-retest reliability for web-based assessments, to identify whether test reliabilities are similar to those obtained during repeated in-person assessments. Our data show intersetting reliabilities that are similar to previously reported test-retest reliabilities obtained during in-person assessments. Automated test scoring of performance indices, standardized across test administrations and testing platforms, circumvents problems with rater-related variance in reliability. However, differences in computer hardware and software can impact reaction time data, and this must be borne in mind during web-based neuropsychological assessments.

Overview and Implications

This study compared web-based CANTAB tests with gold-standard, in-person administered, lab-based assessments. Performance indices obtained in person showed broad equivalence, good agreement, and significant linear relationships with those obtained during web-based assessments. Overall, this study provides evidence for the comparability of a range of performance outcome indices examined using web-based testing in a healthy adult sample. Certain performance indices showed better comparability than others and should therefore be preferred where comparability with typical in-person assessment is needed. Reaction time indices were not found to be comparable, and greater care is required when interpreting web-based latency results in relation to typical in-person assessments.

Acknowledgments

This study was financially supported by Cambridge Cognition, a digital health company specializing in computerized cognitive assessment, including CANTAB.

Abbreviations

CANTAB: Cambridge Neuropsychological Test Automated Battery
ERT: emotion recognition task
ICC: intraclass correlation coefficient
OTS: one touch stockings of Cambridge
PAL: paired associate learning
PRM-D: pattern recognition memory delayed
PRM-I: pattern recognition memory immediate
RVP: rapid visual information processing
SWM: spatial working memory

Multimedia Appendix 1

Conflicts of Interest: All authors are employed by Cambridge Cognition and have no other conflicts of interest to declare.
