PCSAS application materials are submitted electronically.
Potential applicants must first submit a Letter of Intent to apply, in which they indicate their interest in applying for accreditation and provide sufficient preliminary information for PCSAS to determine whether they meet its eligibility standards. Prior to submitting a Letter of Intent, potential applicants are encouraged to contact PCSAS with questions or concerns and to request an “Initiation Packet” for the Letter of Intent. Such inquiries are without cost or obligation and should be made to PCSAS Executive Director Joe Steinmetz (jsteinmetz@pcsas.org; 479-301-8008).
The Letter of Intent is reviewed by two independent reviewers who determine eligibility. If these reviewers cannot agree, a third reviewer reads the material and a majority vote determines the decision.
A program deemed eligible will have up to one year to submit a completed application, although extensions may be granted.
A program deemed ineligible may appeal to the full PCSAS Review Committee, beyond which there is no further appeal. Ineligible programs must wait at least one year before submitting a new Letter of Intent.
As with the Letter of Intent, programs are encouraged to maintain contact with PCSAS Executive Director Joe Steinmetz (jsteinmetz@pcsas.org; 479-301-8008) throughout the application process.
Interested programs must satisfy the following minimum requirements in order to be judged eligible to apply for PCSAS accreditation.
– The scope of PCSAS accreditation is limited to doctoral training programs that grant Ph.D. degrees in psychology with a core focus on the specialty of psychological clinical science. Programs must be housed in departments of psychology (or their equivalent) within accredited, nonprofit, research universities in the U.S. and Canada.
– Accreditation is limited to programs that subscribe to an empirical epistemology and a scientific model–i.e., an educational and clinical training model in which the advancement of knowledge and its application to problems are driven by research evidence, and in which research and application are integrated and reciprocally informing.
– Accreditation is limited to Ph.D. programs with a primary mission of providing all students with high-quality, science-centered education and clinical training that arms them with the knowledge and skills required for successful careers as clinical scientists, broadly defined.
– Accreditation is limited to programs within the intellectual and educational domain of clinical psychology. This may include hybrid varieties, such as health psychology, clinical neuroscience, clinical behavioral genetics, etc. However, to be acceptable the hybrid model must involve the integration of clinical psychology–i.e., the application of psychological knowledge and methods to research and clinical problems relevant to mental and behavioral health–with one or more complementary scientific perspectives for the purpose of gaining added leverage on specific target problems. In all cases, clinical psychology must be the core component of the model.
– Accreditation is limited to programs with the primary goal of producing graduates who are competent and successful at (a) conducting research relevant to the assessment, prevention, treatment, and understanding of health and mental health disorders; and (b) using scientific methods and evidence to design, develop, select, evaluate, implement, deliver, supervise, and disseminate empirically-based clinical assessments, interventions, and prevention strategies.
– In their Letters of Intent and in public documents, potential applicants must demonstrate a commitment to providing an education within the boundaries that define PCSAS accreditation–i.e., in scope, epistemology, mission, goal, and domain.
– Potential applicants must agree to conduct a detailed self-study prior to preparing an application, and to provide an accurate summary of the self-study’s results in their application materials. Each program must agree to full disclosure of all information the Review Committee requires in order to carry out its responsibility of evaluating programs and reaching accreditation decisions.
– Applicants must agree to arrange, coordinate, and complete a site visit of their program after submitting the application and prior to the scheduled Review Committee review.
– Applicants must have paid the non-refundable application fee and have signed the PCSAS Applicant Agreement prior to the review of their application.
– Finally, applicants must agree to accept the Review Committee’s decision as specified in the Applicant Agreement. However, the decision process may include an appeal in keeping with PCSAS procedures.
Because the Review Committee evaluates applications only from programs that explicitly assert that they fit within the defined scope of PCSAS accreditation and that they satisfy PCSAS’s standards, the Review Committee’s task is essentially to evaluate each program’s integrity and quality. To accomplish this, the Review Committee rigorously and objectively examines the evidence from each program’s application materials and its site visit report to assess how well the program matches PCSAS standards. It also examines whether the program’s public declarations, such as its program handbook and website statements, are in keeping with a PCSAS clinical science model. The Review Committee makes qualitative evaluations of each program in seven general areas:
1) Conceptual foundations: To be eligible for review, each applicant program will have endorsed the epistemology, mission, goals, and domain that define PCSAS accreditation. Because a hallmark of PCSAS accreditation is flexibility, programs are given leeway to develop their own distinctive and innovative approaches to translating these core concepts into practical, effective, real-world doctoral programs; PCSAS believes that the field and the public benefit from diversity in how clinical science training is accomplished. This diversity may reflect taking advantage of particular local resources and opportunities, as well as pursuing efforts to move the field forward with well-conceived training innovations.
2) Design, operation, and resources: The Review Committee examines: (a) the quality, logic, soundness, and coherence of each program’s overall operation; (b) its stability, educational plan, and pedagogical approach; (c) its content, curriculum, and administration; and (d) the availability and use of resources. The Review Committee also evaluates how effectively the program’s design and resources are channeled toward achieving the program’s goals.
3) Quality of the science training: The Review Committee evaluates the overall quality of the scientific content, methods, and products of the program’s doctoral training and education–i.e., how well the program embodies the very best, cutting-edge science of the discipline.
4) Quality of the application training: The Review Committee evaluates the extent to which clinical training is based on science/application integration that prepares program graduates to function as independent providers of clinical services and assume responsibility for patient care by making clinical decisions based on the best available scientific evidence.
5) Curriculum and related program responsibilities: PCSAS accreditation requires that training programs demonstrate that their students have the necessary breadth and depth of knowledge and training experiences to engage in high-quality clinical science scholarship, research, and clinical applications. Programs must clearly articulate their training goals; present a coherent training plan by which students will obtain the necessary breadth and depth of knowledge and experience (e.g., courses, workshops, practica, laboratory rotations); and describe the ways that they will ensure that students have achieved these goals. In addition, programs must ensure that ethical standards and concern for diversity are reflected in training for scholarship, research, and clinical applications as well as in program characteristics and policies (see below).
a) Ethics. PCSAS accreditation requires that programs provide training in all relevant codes of ethical behavior and legal and regulatory requirements for scholarship, research, and clinical application, including those nationally recognized professional ethics codes pertinent to psychological clinical science. Clinical science training programs must ensure that relevant ethical standards are integrated into all major aspects of clinical science training, including didactic experiences, applied training, and research. Such integration should promote the production and application of clinical science that is fair and compassionate, reflecting the fundamental principle of beneficence by carefully protecting and promoting the well-being of clients, research participants, and colleagues.
b) Diversity. PCSAS accreditation requires that programs hold diversity, equity, and inclusion as essential values. Programs must attend to all dimensions of human diversity, including but not limited to race, color, ethnicity, age, gender, gender identity, sexual orientation, socioeconomic status, marital status, national origin, disability, beliefs, and culture, as well as how those identities and others may intersect. These dimensions warrant attention in terms of the scholarly content of instruction, the demographics of members of the program and the clients it serves, and the climate the program promotes, which in combination contribute to the strength of PCSAS programs and the value of the training they provide.
6) Quality improvement: The Review Committee examines the program’s investment in continuous quality improvement, looking for evidence of: ongoing critical self-examination; openness to feedback; flexibility and innovation; monitoring of program results; and engagement in strategic planning as the field changes in response to the dynamic mental health care environment. The Review Committee expects each program to monitor its design, operations, and outcomes, and to use these data to pursue excellence and to plan strategically for the future.
7) Outcomes: The Review Committee’s evaluations place the greatest weight on each program’s record of success: To what extent do the activities and accomplishments of a program’s faculty, students, and graduates – especially its graduates from the last ten years – exemplify the kinds of outcomes one expects of programs that successfully educate high-quality, productive psychological clinical scientists? Included here are graduates’ ongoing contributions to research and to the broad dissemination of science-based practice.
For each applicant program, the Review Committee examines, integrates, and evaluates all the evidence across these seven areas, makes a qualitative rating, and then decides whether the program will be awarded PCSAS accreditation.
The following are the types of information considered by the Review Committee in its evaluation of a program’s performance in the seven areas outlined in C. General Accreditation Standards. These examples are only for purposes of illustration, and should not be construed as a checklist of the criteria by which a program would be assured of accreditation. Quality of execution is crucial in satisfying these criteria. Moreover, because PCSAS does not take a “one-size-fits-all” approach to accreditation, it is conceivable that an innovative program might not match all of these criteria in conventional ways, yet still be evaluated favorably by the Review Committee. The key point is that the burden of proof regarding the merits of a program rests on the applicant. Each program must demonstrate convincingly that it has a successful record of offering high-quality doctoral education and clinical training in psychological clinical science.
1) Conceptual foundations:
a) Does the program offer a clear, rational explication of its mission, goals, philosophy, and epistemology?
b) Are the program’s conceptual foundations logically coherent, internally consistent, and compatible with PCSAS standards?
c) Do the program’s conceptual foundations have clear implications for the program’s design, operation, climate, and outcomes?
d) Does the program’s mission statement permit a reasonable assessment of the program’s success at achieving its mission?
e) Do the program’s goal statements include proximal and distal objectives that can be translated into observable, measurable outcomes?
f) Are the program’s philosophy and epistemology clearly related to the program’s design and operation? Are they consistent with PCSAS’s mission and standards?
g) Are the activities of the faculty, students, and graduates consistent with the program’s conceptual foundations?
2) Design, operation, and resources: The program’s design, operation, and use of resources should contribute to the program’s realization of its mission and goals. The following topics illustrate the information of particular interest to the Review Committee:
a) Student recruitment, selection, and mentoring:
i) How does the program recruit its students? Is there evidence that these methods are appropriate, efficient, and successful?
ii) What are the program’s procedures for selecting graduate students? Is there evidence that these are appropriate, valid, and successful?
iii) How well do the scientific interests and career goals of matriculating students match the program’s mission, goals, and design? Is there evidence of “fit”?
iv) Do selection procedures yield high-quality students with strong educational backgrounds in science, research experience, and scientific interests and aptitudes? What is the evidence?
v) Mentoring Model: What is the program’s model for providing the kind of intensive mentoring of individual students that is required to produce first-rate clinical scientists who integrate research and clinical application?
vi) How is the faculty mentoring organized? Are students distributed among the faculty in a manner that promotes high quality training? Are the interests of matriculating students matched to their mentors’ expertise? What is the evidence that the program’s mentoring model is successful?
vii) How long, on average, does it take a student to complete the program? Is that time justified by the program’s objectives, and what is the distribution of time-to-completion? What factors influence this pattern, and is it reasonable?
viii) How does the program deal with underperforming, troubled, or discontented students? Is there a formal system for communicating performance feedback and handling grievances? How well do these work?
ix) Specific information that the Review Committee would examine to evaluate these issues includes:
(1) the credentials (e.g., GPA, GRE, undergraduate majors/curricula, research work, publications, etc.) of applying, admitted, and matriculating students over the past ten years;
(2) student/faculty ratios, and distribution of students across mentors and content areas;
(3) sources and distribution of financial support for doctoral students;
(4) records of student progress, the evaluation process, and history of dealing with underperforming students; and
(5) formal grievance policies, procedures, and implementation.
b) Curriculum design: Meritorious clinical science training is not restricted to one particular set of courses, training methods, or content areas. Rather, it is assumed that there are multiple ways to reach common goals. Thus, it is up to each program to specify its goals; to develop a clear plan for achieving these goals; to devise a curriculum that gives individual students the necessary flexibility to tailor their training to their specific goals; to identify appropriate benchmarks for assessing the curriculum’s results; and to relate performance on these benchmarks to the overall goal of providing high-caliber education and training in psychological clinical science. Because broad areas of science may be relevant to the advancement of psychological clinical science, programs are encouraged to design curricula that promote integration, innovation, collaboration, and exploration across diverse areas of psychology and other sciences. The Review Committee will be interested in examining the following aspects of the curriculum:
i) How is the curriculum structured? What are the pedagogical rationale, logic, and specific goals of this structure?
ii) How flexible, individualized, and integrative is the curriculum design? To the extent that it is individualized, how are decisions made, and by whom?
iii) What are the key indicators of progress and success (or difficulty and need for attention), and how are these monitored as individual students move through the curriculum?
iv) What are the common, critical milestones, and how are they designated, observed, and measured?
v) How well does the curriculum’s design serve the program’s mission and goals? What is the evidence of its achievements?
vi) How are research and clinical application integrated in this curriculum?
vii) Possible indicators of interest to the Review Committee:
(1) the core curriculum, its timeline, and its degree of breadth and flexibility;
(2) course syllabi and instructors;
(3) opportunities and procedures for individualizing the educational experience;
(4) methods for monitoring and evaluating the progress of individual students;
(5) sample exams in critical courses and sample qualifying/prelim exams;
(6) student academic awards.
viii) In addition, although there are few specific course requirements for PCSAS accreditation, the Review Committee will look for evidence that the program:
(1) provides effective training in the major areas of psychological clinical science–psychopathology and diagnosis, broadly conceived; clinical assessment, measurement, and individual differences; and prevention and intervention;
(2) allows for individualized training; and
(3) stays abreast of the evolving knowledge base in psychological science. Although the Review Committee does not insist that students acquire expertise through specific required courses, the Review Committee will expect clear evidence of students’ expertise. Each applicant has the opportunity and responsibility to make this case.
c) Research training: One of the primary missions of PCSAS-accredited doctoral programs is to train psychological clinical scientists who will be able to generate new knowledge relating to mental and behavioral health problems. Therefore, programs must demonstrate that their students conduct meaningful research as a focal part of their graduate education. Some key indicators of the quality of research training include:
i) Is research training a core of the program? Are students actively involved in scientific research throughout their graduate education?
ii) Is the student’s research training integrated meaningfully with all other aspects of the student’s training–e.g., coursework, clinical application training, teaching experiences?
iii) Do students receive individualized mentoring in faculty laboratories?
iv) Are students the authors and co-authors of high-quality research presentations and peer-reviewed publications?
v) Are all students required to demonstrate a solid grasp of research and quantitative methods?
vi) Do all students demonstrate a solid understanding of the important knowledge base and theories across diverse areas of psychological science and other sciences, and does this understanding inform their own research?
vii) Do students produce high-quality dissertations that help launch their careers and that advance psychological science?
viii) Do graduates function as productive research scientists?
ix) Possible indicators:
(1) student research products, grants, presentations, publications, awards;
(2) evidence of student involvement in research, such as research courses taken, specific skills acquired; and
(3) research involvement after graduation, such as appropriate post-doctoral positions (including but not limited to tenure-track faculty positions emphasizing research productivity), grants, publications, and awards.
d) Clinical application training: Because psychological clinical science is an applied science, it requires that doctoral students acquire a deep and thorough understanding of the clinical phenomena that will be the central focus of their scientific careers. Graduates must be able to function as independent clinical scientists, able to assume clinical responsibility for patients with problems in their areas of expertise. Thus, they must be trained to a high level of professional competence in the most cost-effective, efficient, empirically supported procedures for the clinical assessment and treatment of specific populations and problems, and they must also be capable of training and supervising others in these clinical procedures, where appropriate. Students must acquire clinical competence through direct application training, including well-organized, well-monitored, science-based practicum and internship experiences. Innovative approaches to the design and implementation of applied training are encouraged, with the aim of improving the effectiveness and efficiency of clinical training; however, programs are expected to provide evidence that such innovations achieve or exceed the intended results. Clinical science training in applications should be characterized by:
i) a clear scientific evidence base for the assessments and interventions taught;
ii) an integrated focus on consistent evidence-based principles and processes across both research and applied activities; and
iii) a meaningful assessment of clinical skill acquisition in specific research-supported procedures for specific problems.
iv) Clinical scientists also should be prepared to select, train, supervise, and evaluate mental health workers from multiple disciplines in the clinical application of research-supported treatments. They should also be expected routinely to gather, analyze, and interpret data on the procedures and outcomes associated with their applied activities. Because PCSAS accreditation is outcome-focused, there are few requirements regarding specific coursework or other specific forms of applied training experiences that must be provided across all accredited programs. However, the training should produce clinically competent, license-eligible graduates. Possible indicators of interest to the Review Committee include the following:
(1) Is clinical training a core of the program? Are students actively involved with clinical issues throughout their graduate education?
(2) Do students receive individualized on- or off-site training whereby they gain direct, first-hand, mentored experiences with specific science-based procedures and with populations or problems that are relevant to ensuring their competence in core areas of clinical science application–i.e., clinical assessment, diagnosis, prevention, treatment, supervision, and program evaluation?
(3) Do the sequencing, design, and amount of these applied experiences adequately prepare the student to function as an independent and integrative clinical psychologist who is competent to work in applied settings and also to function as a productive research scientist?
(4) Examples of possible proximal indicators:
(a) syllabi of relevant coursework and training experiences demonstrating the integration of science and application;
(b) performance evaluations for applied clinical science activities, such as empirically based practicum experiences and high-quality internships;
(c) relevant performance samples and examinations;
(d) student publications, grants, and presentations related to clinical applications and their scientific bases;
(e) outcome data from application research;
(f) results on licensure examinations.
(5) Examples of possible distal indicators:
(a) evidence of leadership and training roles in the delivery of empirically supported applications;
(b) specific contributions to improving public health, including the development of new or more effective interventions, evaluations of interventions, dissemination of interventions, cost-effectiveness analyses, health-care policy analyses, etc.;
(c) evidence of public health impact, including papers and publications, media reports, grants, awards, citations, outcome data;
(d) records of use of science-based applications; and
(e) involvement in advancing science-based clinical application through teaching, training, supervision, evaluation, policy making, and program administration.
e) Program faculty: The program’s faculty must have the credentials to educate and train psychological clinical scientists–individually and collectively. Individually, faculty members should be exemplary role models of the kind of clinical scientist that the program envisions its students becoming. Collectively, they should provide broad representation of contemporary clinical science, should exemplify the kind of integration and collaboration that is a program goal, and should have a track record of mentoring students who have gone on to successful careers, making significant contributions to the advancement of psychological clinical science and to the application of that science to improving the human condition. The Review Committee will look for these kinds of indices of faculty quality:
i) Active research laboratories
ii) High-quality, high-impact research publications
iii) Research grant support
iv) Peer recognition, influence, and awards
v) Evidence of teaching influence, including ability to attract and retain high-quality students, positive student evaluations, membership on student committees, course syllabi, teaching awards and honors.
vi) Personal embodiment of the integration of research and application.
f) Resources and environment: Does the program have the necessary resources to achieve its mission and goals? The Review Committee will look at the following:
i) Does it have stable leadership and administration?
ii) Does it have a faculty of sufficient size, with sufficiently diverse interests and expertise, who themselves are strong models of clinical science? Does the student-faculty ratio permit high-quality, intensive supervision?
iii) Does the faculty take responsibility for ensuring that the students receive high-quality integrated research and application training?
iv) Are students well supported financially and in other critical ways?
v) Does the program treat applicants, students, faculty, staff, graduates, and the public respectfully, fairly, and ethically?
vi) Do student-faculty, student-student, and faculty-faculty relationships foster an atmosphere of collaboration, intellectual stimulation, and learning that is conducive to scientific training and research productivity? What is the evidence for this?
vii) Does the program have strong support from and collaborative relationships within its department and institution? Evidence?
viii) Do the faculty and students have access to the settings, space, equipment, staff, technicians, and other support they need in order to engage in cutting-edge scientific research?
3) Quality of the science training: What is the overall scientific quality of the doctoral program’s intellectual content, pedagogical and research methods, research products, and involvement in public health applications? How well do these various aspects of the program reflect and promote high quality and significant scientific knowledge and application? The Review Committee will arrive at a qualitative judgment after examining the content and substance of the important components of the program’s education and training. Examples of relevant evidence of quality would include syllabi, faculty and student research publications, dissertations, examinations, colloquia topics, selection of applications, and the breadth and depth of basic science content and methods that students learn. See c) Research training in 2) Design, operation, and resources for more details.
4) Quality of the application training: What is the overall quality of training for clinical practice? How does the program move students through a sequence toward excellence? Are practice and research well integrated, as opposed to running in parallel? For example, does application inform a student’s research, and does research inform the way a student treats clients? The Review Committee will arrive at a judgment of the quality of application training by examining such aspects as: the quality of practica, both within and outside the department; whether application training is evidence-based and at a level perceived as a “best practice”; feedback given to students on practice issues; the regular evaluation of practice sites as appropriate for training; etc. See d) Clinical application training in 2) Design, operation, and resources for more details.
5) Curriculum and related program responsibilities: Does the program have a clear path for all students to acquire the necessary breadth and depth of knowledge and training experiences that would enable them to engage in high-level clinical science research and application? How is this breadth documented and evaluated? Does the program ensure that all students receive training in all codes of ethical behavior relevant to both research and clinical applications? How is competence in these areas documented and evaluated? Does the program ensure that all students demonstrate sensitivity to the role of contextual factors–cultural, social, biological, and other sources of variability and individual differences–in both research and clinical applications? How is competence in these areas documented and evaluated? Can the program document its commitment to diversity among faculty, staff, and students?
6) Quality improvement: Does the program have an enduring commitment to continuous quality improvement? Does it routinely collect data on its own performance outcomes to evaluate its goal attainment? Does it provide the faculty, students, and staff with informative, self-corrective feedback based on these data? Do the students, staff, and faculty participate actively in proposing and selecting modifications aimed at improving outcomes? Does the program provide the public–potential applicants, oversight agencies, etc.–with accurate, unbiased summaries of this information, and with descriptions of changes aimed at self-improvement? How does the program monitor its design, operations, and outcomes, and use these data to plan for the continuance of excellence in the future?
7) Outcomes: The sine qua non for a program to receive a favorable Review Committee accreditation decision is clear and compelling documentation that the program has built a solid record of successfully producing graduates who have gone on to lead productive careers, and to make high-quality contributions, as psychological clinical scientists. The evidence would include such things as graduates’ records of: research publications; grants; dissemination activities; leadership roles; awards; and contributions to translating basic science into effective applications. To document such outcomes, programs need to provide detailed records for all of their graduates over the past ten years or more. In general, the Review Committee will be asking, “What is the evidence that the majority of a program’s graduates have actively pursued careers, and engaged in professional activities, that have contributed meaningfully to the advancement and application of scientific knowledge regarding the origin, clinical assessment, diagnosis, prevention, and amelioration of mental and behavioral health problems?” This primary focus on outcome evidence is driven by the mission statement and specific goals underlying PCSAS accreditation. The program must provide admissions, retention, and other outcome data to prospective students and the public on its website, in addition to tracking the post-graduation careers and achievements of its former students.
The procedures and criteria in this section have been adopted formally by the PCSAS Board of Directors as the framework within which the Review Committee is expected to evaluate the quality of applicant programs and to determine whether they should be awarded the imprimatur of PCSAS accreditation.