
Exam Development

The exam is developed by a group of committees composed of experts in child life. These committees are part of the Child Life Certification Commission (CLCC) and include the Item Writing, Item Review, and Exam Assembly Committees. In addition to these committees, CLCC assembles panels of Subject Matter Experts (SMEs) for other exam development tasks such as the Job Analysis.

To get involved, CCLSs may volunteer to participate on CLCC committees.

The procedures described below are widely accepted for developing reliable and content-valid examinations. Multiple reviews by content and psychometric experts and the use of stringent criteria strengthen the validity of the test. Continuous evaluation of each examination’s reliability ensures that the test consistently and accurately measures examinees’ skills.


Job Analysis

A periodic Job Analysis defines child life practice by delineating the domains, tasks, knowledge, and skills necessary for competent practice as an entry-level child life specialist. From this assessment, content areas and their appropriate relative emphases are defined, and a survey of child life practitioners validates the resulting outline. The Exam Content Outline is the product of this process.

A Job Analysis is an essential step in safeguarding consumers as it is a methodology for defining the characteristics of an individual who has the capacity to perform competently.

It is essential that examination content be reviewed periodically to ensure that existing outlines continue to cover the knowledge, skills, and abilities required for competent practice in the field. CLCC conducts Job Analyses approximately every 5 years.

Item Writing and Review

Item Writers are charged with the development of new items (questions) that adhere to the Exam Content Outline for inclusion in the Item Bank. 

Each item must have at least one published reference in which the correct answer to the question may be found.  Item writers are provided a list of approved references and in turn identify at least one publication including page numbers to validate each item they write.

It is critical that items have only one correct option, and the distracters (wrong answers) must clearly be wrong. Degrees of correctness are important and can assess candidates’ ability to exercise professional judgment, but the key (correct answer) must be the most correct answer and not debatable. Each of the distracters must be plausible, but inaccurate.

Item Reviewers apply stringent criteria in evaluating the new items:

  • the content of each item must be well-documented and representative of best practice;
  • items must be clearly written and easily understood;
  • the correct answer must be accurately identified;
  • the distracters must be plausible, but indeed wrong; and
  • items must not discriminate against one group of examinees or another.

Once the items are developed, reviewed, and validated, they are subjected to psychometric and editorial reviews. The final versions of the items are entered into the Item Bank for possible use on future examinations.

Exam Assembly

The Item Bank is the repository for all the questions used on the certification exam.  Items are drawn from the bank by a test development specialist and are assembled into preliminary versions of the examination as specified in the Exam Content Outline.
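
In practice this assembly is done with specialized test-construction software; the sketch below (in Python) only illustrates the blueprint-matching idea. The domain names, weights, and form length are hypothetical stand-ins for the actual Exam Content Outline values.

    import random

    CONTENT_OUTLINE = {                    # hypothetical domains and weights
        "Professional Responsibility": 0.30,
        "Assessment": 0.30,
        "Intervention": 0.40,
    }
    FORM_LENGTH = 150                      # total items on each form

    def assemble_form(item_bank):
        """Sample items so each domain fills its outlined share of the form."""
        form = []
        for domain, weight in CONTENT_OUTLINE.items():
            needed = round(FORM_LENGTH * weight)      # 45, 45, and 60 items
            pool = [item for item in item_bank if item["domain"] == domain]
            form.extend(random.sample(pool, needed))
        random.shuffle(form)               # interleave domains across the form
        return form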

Draft forms are reviewed by psychometric experts to ensure their psychometric integrity. Performance data is collected for all items used on the certification exam. This helps gauge the difficulty of each item as well as its ability to differentiate between candidates who meet the set standard and those who do not. Using these statistical item analyses, inappropriate or questionable items are either revised or omitted from the examination.
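
CLCC’s exact statistical model is not described here, but classical test theory gives a simple picture of the two statistics mentioned above: an item’s difficulty is the proportion of candidates who answer it correctly, and its discriminating power can be estimated with a point-biserial correlation between success on the item and overall score. A minimal sketch, with hypothetical function and variable names:

    from statistics import mean, pstdev

    def item_statistics(item_correct, total_scores):
        """Classical analysis of one item.

        item_correct: 1/0 per candidate (1 = answered the item correctly)
        total_scores: each candidate's overall exam score
        """
        p = mean(item_correct)             # difficulty: proportion correct
        if p in (0.0, 1.0):
            return p, None                 # no variation; discrimination undefined
        mean_right = mean(t for c, t in zip(item_correct, total_scores) if c)
        r_pb = ((mean_right - mean(total_scores)) / pstdev(total_scores)
                * (p / (1 - p)) ** 0.5)    # point-biserial discrimination
        return p, r_pb

An item that strong candidates miss as often as weak candidates do (a point-biserial near zero or negative) is a natural candidate for revision or removal.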

Draft forms are reviewed by subject matter experts for technical accuracy. The Exam Assembly Committee convenes annually to assemble forms for the upcoming year. This process is completed via a series of web conferences facilitated by our testing partner. Committee members receive draft exam forms prior to each session, with updates made between sessions based on the previous call’s discussion.

Although each item was previously judged to be psychometrically sound, fair, and content-valid, the same review process used in item development is used to give the questions one final review before being presented to candidates.

Finally, the forms receive an editorial review before they are finalized.

Scoring

Of the 150 questions on the exam, 25 are pretest items and are not weighted (not counted toward candidates’ scores). Piloting items in this manner allows them to be analyzed before they become weighted and verifies that they contribute toward measuring a candidate’s proficiency in the material and are not irrelevant or poorly written. Candidates answer the pretest items, and their performance data is used in a statistical analysis to determine whether the questions perform as intended. If so, their ability to contribute to a test’s quality is verified and they can be considered for inclusion on future exams as scored items.
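
What “perform as intended” means operationally is set by the program’s psychometricians; the rule of thumb below is only an illustration, reusing the difficulty and discrimination statistics sketched earlier with hypothetical thresholds:

    def screen_pretest_item(p, r_pb, min_p=0.20, max_p=0.95, min_r=0.15):
        """Decide whether a pretest item may graduate to scored status."""
        if r_pb is None or r_pb < min_r:
            return "flag: weak discrimination"   # fails to separate proficient
                                                 # from non-proficient candidates
        if not min_p <= p <= max_p:
            return "flag: too hard or too easy"  # contributes little information
        return "eligible for future scoring"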

Each exam form is equated to ensure that candidates are neither penalized nor rewarded if an exam form is more difficult or easier than any other version of the exam. Collecting performance data enables the test administration agency to conduct this important process. Because forms are equated in advance, candidates can receive their scores onsite unless a Cut Score Study is required; Cut Score Studies are normally conducted only immediately following a Job Analysis.
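
The specific equating method CLCC uses is not stated here. As one common classical approach, mean-sigma linear equating maps a raw score on a new form onto a reference form’s metric so that a harder form does not depress scores; the numbers below are purely illustrative:

    def linear_equate(raw_score, form_mean, form_sd, ref_mean, ref_sd):
        """Express a raw score on a new form in the reference form's metric."""
        z = (raw_score - form_mean) / form_sd    # standing on the new form
        return ref_mean + z * ref_sd             # same standing, reference scale

    # A raw 98 on a slightly harder new form (mean 95, sd 12) corresponds to
    # about 101 on a reference form with mean 98 and sd 12:
    # linear_equate(98, 95, 12, 98, 12)  ->  101.0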

Candidates receive preliminary results before leaving the testing center. These results are subjected to a statistical analysis, and there is a very slight chance that they will change; candidates are notified if their score changes. Once results are posted in a candidate’s ACLP Online User Profile, they are considered final.

The purpose of the Child Life Professional Certification Exam is to measure an individual’s performance against a specific criterion: the established passing score as defined by subject matter experts. This is called a criterion-referenced exam.

This type of exam is not intended to measure a range of levels of competence, but rather to confirm that an individual meets a minimum level of competence. Like most credentialing exams, the objective of the Child Life Professional Certification Exam is to measure candidates’ comprehension of the body of knowledge identified in the Practice Analysis Study, not to compare it with that of other candidates.

The goal of criterion-referenced assessments is to measure performance most precisely in a narrow range near the passing score. The more precise the exam is in this range, the less suitable it is for assessing aptitude at levels above or below the passing score. As a result, it would be inappropriate to use test scores to rank individuals.

For this reason, CLCC does not release test scores above the passing score. Candidates who reach or exceed the passing score do not receive their numerical score, only the information that they have passed the exam. This is done in part to prevent the improper ranking of individuals by stakeholders such as prospective employers.

Candidates who do not meet the cut score will receive their scaled numerical score as well as the percentage of correct answers they provided for each of the three domains.
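
Put together, this reporting policy reduces to a simple branch. The cut score below is a hypothetical placeholder, not CLCC’s actual passing standard:

    PASSING_SCALED_SCORE = 500                   # hypothetical cut score

    def score_report(scaled_score, domain_correct, domain_totals):
        """Assemble a candidate's result per the policy described above."""
        if scaled_score >= PASSING_SCALED_SCORE:
            return {"result": "pass"}            # no numerical score released
        return {
            "result": "fail",
            "scaled_score": scaled_score,
            "percent_correct_by_domain": {
                d: round(100 * domain_correct[d] / domain_totals[d])
                for d in domain_totals
            },
        }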

Large differences in domain-specific success may indicate areas of strength and weakness. However, domain-level scores are not a reliable predictor of future performance and are provided for descriptive purposes only. Domain-specific scores should be interpreted with caution: the Child Life Professional Certification Examination is not scored by section, and the difficulty level of items in each domain may vary slightly. Domain-specific results are based on small subsets of the overall content of the exam and are therefore less reliable than the official scaled score. Further, domain-specific results are not comparable from one exam form to another, because forms are equated to adjust for differences in difficulty so that candidates are neither rewarded nor penalized based on a form’s difficulty level.

The overall scaled score relative to the passing standard is the most accurate representation of performance on the exam.



