On October 23, 2019, I walked into one of the most eye-opening meetings of my life. As a newly minted forensic fellowship program director, I entered the Association of Directors of Forensic Psychiatry Fellowships (ADFPF) annual meeting full of hope and fresh energy, looking forward to the chance to discuss lofty topics in forensic education with some of my heroes and mentors. I expected tea and crumpets. What I got instead was “fight club.” Those 90 minutes of tense discussion, highlighting the fears and concerns experienced by program directors during the selection process, initiated my awakening to some hard truths. Later, as I worked through my first application cycle as a program director, I was struck by the fact that nearly all applicants confided in me their misery and stress in navigating a system that constantly placed them in binds, with few trusted neutral advisors. Perhaps it was my relative youth and inexperience that encouraged applicants to share their suffering with me, the fact that I had been in their shoes not so long ago. By the time I had finished my first application cycle, I had become woke.
There is immense suffering present in the current incarnation of the forensic fellowship application process. Some suffering is unavoidable, stemming from the natural uncertainty of any admissions process, but much of it is unnecessary, avoidable, and counterproductive. The current process is not only cruel, it is senselessly cruel, harming the vast majority of programs and applicants while (arguably) benefitting only a tiny slice. We must reform the system, as soon as possible, to eliminate avoidable suffering, particularly by the most vulnerable of the participants in this process: the applicants.
The Problem
These are unhappy days in the world of forensic psychiatry fellowship programs. Here is the crux of the problem: too much product, not enough customers. Agapoff and colleagues report that for the 2016–2017 academic year, forensic psychiatry fellowships achieved a 56 percent fill rate, with 65 residents spread over 44 programs offering a total of 116 positions.1 Since then, the number of forensic programs has continued to grow, up to 48 ACGME-accredited programs offering 127 positions in 2018–2019. Seventy-three of those positions were filled, equating to a 57.5 percent fill rate.2 Things were better in earlier years: according to ACGME records, in 2012–2013 there were about the same number of active residents (70) in just 36 programs.3
The implications are clear: forensic fellowship programs are increasingly desperate to recruit a small number of applicants. From the perspective of program directors such as me, the rational strategy in this situation is to identify promising applicants early and try to sign them up before anyone else can get to them. Indeed, in recent years, fierce competition has led programs to make earlier and earlier offers that are time-limited (also known as the “exploding offer”). Paranoia is high. Given the nontransparent nature of most transactions in the applications process (no one really knows what anyone else is up to) and the lack of objective referees, it takes only the slightest hint of malfeasance for outrage and fear of missing out to be amplified.
The overriding fear of many program directors is that they will not fill their available positions. In addition to bruised egos, being left with open positions means contracts will be left unfilled, possibly leading to cancellation and, ultimately, reduction or elimination of programs. Literally, to not fill risks death (of the program). The imperative, then, is to avoid not filling at all costs.
On the other side is a paradox. For applicants, low fill rates should translate into a buyer's market, yet because the market is unregulated, the current system inflicts much suffering on them. As one recent applicant succinctly described the process: “it's a hot mess.” Competition among programs for the limited number of applicants has led to earlier and earlier offers with shorter and shorter times to decide, too short for applicants to adequately assess and receive offers from other programs. Indeed, the whole point of an exploding offer, from the program's point of view, is to curtail assessment of other programs by forcing applicants to make decisions before they might otherwise be ready. In marketing parlance, the idea is to pick up a bargain by taking a good off the market before it can be fairly priced.
A theme I will be returning to is the reality of structural inequities that disadvantage applicants. Programs, although not omnipotent, hold much more power. For example, programs have much more information than applicants to make their decisions. For most applicants, this will be their first time navigating the process of evaluating forensic programs and assessing their own competitiveness compared with other applicants. Although applicants talk to other applicants, in essence the only application they have complete knowledge of is their own; they are an n of 1, making it difficult to accurately ascertain their worth to programs.
On the other hand, most programs receive multiple applications per year and have access to historical data from previous years. They have a higher n with which to compare applicants and rank competitiveness. Thus, they are in a better position to identify “bargains” (competitive applicants who may not yet know how competitive they are) and make early offers to them. Although programs can benefit from occasional bargains picked up this way, of course the problem is that other programs are doing the same. Thus, even though they are the more powerful parties in this negotiation, programs on average lose about as much as they gain from the current arrangement, while suffering from constant anxiety about other programs acquiring their desired candidates.
The dilemma for the recipient of an exploding offer is obvious. They must either accept a guaranteed offer now or let the offer expire in the hope that a better one will come later, risking being forced to accept a worse offer, or none at all. Exploding offers force the applicant to make a decision with incomplete information. Some will, by luck, make “the right” decision, ending up at their best-fit program. But many will make “the wrong” decision, either committing early to a program that was not as good a fit as another program that would have made an offer later, or forgoing an early offer in the hope of a better offer that never arrives. In essence, the current system has made gamblers of everyone, leading to generalized anxiety and distrust. This is sheer lunacy, particularly when an alternative exists that takes chance out of the equation and guarantees optimal matches.
The applicant's dilemma reveals another structural inequity: applicants have more to lose than programs by making the wrong bet. For applicants, rolling the dice may mean that at the end of the process they may not have any offer in hand. Further, the calculation to accept a nonideal early offer is premised on applicants' assessments of their own desirability to programs and may thus disproportionately play on the fears of applicants who traditionally have felt less empowered, such as international medical graduates. On the other hand, programs usually have multiple positions to fill and can make multiple offers simultaneously; not filling a position is not ideal, but it usually does not mean that the program will not survive in its entirety.
There is one small group that benefits from the current system: programs and applicants that know early on that they like each other. Usually this involves highly competitive applicants who are ready to decide early, inform the program that they wish to attend, and consequently receive an early offer. Or it may involve internal candidates who have decided early that they would like to continue training at their home institution for fellowship. In these particular scenarios, both applicants and programs benefit from having things settled ahead of time. The solution I ultimately propose preserves this benefit for this small subset.
There are steep costs to both sides in pushing application dates earlier and earlier. Some may be surprised to hear that among the applicants who suffer the most in our current system are highly competitive applicants who do not know at the start of the process where they would like to end up. They are typically pressured by multiple early offers, each pressing them to make a decision before they may be ready. One may be tempted to minimize their distress (at having too many offers), but the problem is not that they have multiple offers; the problem is that they are pressured to accept offers before they are ready to decide.
The problem stems from the fact that as a group, we program directors are powerful but needy (due to low fill rates). We are in some ways godlike, but more akin to fallible Greek gods, not equanimous Buddhas. When we inflict our anxieties onto applicants, those anxieties are amplified by the power differential and perceived as pressure. For the applicant, what might otherwise be a pleasant affirmation of their desirability turns into a complex juggling act to appease multiple needy partners, each asking, “Do you like me? How much do you like me? Are you ready to decide yet? How much time do you need? How about now?”
Applicants who do not decide until later in their training to pursue forensics face further pressures. To be competitive, current application date norms pressure candidates to apply to programs in the middle of their postgraduate year 3 (PGY-3), before many have had a chance to be exposed to forensic psychiatry. Granted, given low fill rates, such later-deciding applicants will likely eventually find a program with open positions, but their choices will be more limited.
On the other hand, because applicants are earlier in their careers when they apply, programs have less information with which to assess applicants before extending offers. As a case in point, the Rappeport fellowship is a prestigious honor bestowed by AAPL during its annual meeting in October, one that used to have some meaning for forensic program selection committees. Residents are typically nominated by March of their PGY-3 year and receive the honor as a PGY-4 in October, long after offers are typically made, rendering it irrelevant to the applications process. I have heard from ADFPF “old timers” that in the past, AAPL meetings played a vital role in the applications process, offering a chance for applicants and programs to meet in person and even interview for positions, since admissions decisions typically occurred after October.
The Common Application Process
To its credit, at that October 2019 ADFPF meeting I attended, a majority of program directors approved some preliminary steps to improve the situation. A common application process was piloted, standardizing the minimum components to apply to individual programs (participating programs are then free to build on those minimums with program-specific requirements). In addition, most programs agreed to adhere to standardized timelines, in an attempt to prevent earlier and earlier offers. These timelines were posted on AAPL's website in its directory of fellowships4 (see Table 1).
On its face, this was good progress. Barring interviews from occurring before April 1 of the year prior to the start of fellowship provided a clear limit on how early applicants could be considered. Marking June 1 as the earliest date that “applicants can be asked to accept or decline offers” provided (in theory) a safe interval of time in which applicants could evaluate programs without the pressure of having to make a hasty decision.
There were several problems, however, which limited the benefit of this noble attempt:
Not all programs agreed to adhere to the stated timelines;
Exceptions were explicitly permitted for internal candidates;
Although programs were encouraged to give applicants until June 1 to make their decisions, under the agreement applicants were allowed to commit earlier as long as they were not pressured to make their decision before June 1;
No neutral party (referee) was implemented to monitor or investigate potential violations; and
No enforcement mechanism was implemented to deal with violations, making this a purely voluntary honor system.
First, let us address the internal applicants exception. It is problematic and should be replaced with a more intelligent approach. In the process of researching this article, I had the opportunity to talk with recent applicants about their experiences. Somewhat surprisingly, some of the most memorable stories of distress came from internal applicants who felt pressured by their programs to accept early offers and discouraged from evaluating external programs. The pressure was usually nuanced, along the lines of “Why wouldn't you want to continue to develop your career with us?” or asking for subtle displays of loyalty (“Are you a team player?”).
Perception can differ from intention. The intention of program directors might have been simply to communicate to internal candidates how highly they were regarded and to express that they would be welcome to stay. The perception among many internal candidates, however, was one of distress at having to please their home program enough to keep their options open without insulting anyone, while also evaluating other programs with a guilty conscience, all the while wondering whether their home institution might rescind its offer if they waited too long to decide.
Again, we see the power differential at play, the asymmetry that makes these interactions fraught. The sorts of language program directors consider “pressure” might be very different from what applicants experience as pressure. Because of this, program directors must be very mindful and intentional in their use of language with applicants, given the position of power they occupy in the relationship.
Clear Communications Guidelines
The solution to this is obvious: provide clear communications guidelines and restrict certain kinds of questions that are known to be problematic. Residency and fellowship applications have been going on for a very long time, and at this point we have a wealth of experience with certain kinds of questions that are routinely asked but provide no benefit to the applicant and cause distress. Many of these questions are of the “assortative mating” variety, which has many variations, but they all boil down to either “How much do you like me?” or “Who else am I competing with?”
It is clear why programs ask these sorts of questions. As the system is currently set up, offers are made personally, and program directors do not want to waste time making offers to applicants who are not interested. They also want to know who else might be competing for their applicants to calibrate how soon or aggressively to start making offers. Naturally, directors do not want to keep a spot open too long waiting for a reply, to avoid losing out on other candidates in the meantime who are being courted by other programs. Thus, they are inclined to make early offers and to persuade offer recipients to make quick decisions. In the absence of clear guidelines differentiating “due” from “undue” persuasion, the situation is indeed “a hot mess.”
The National Resident Matching Program (NRMP), a private nonprofit corporation that has run “the Match” for U.S. medical residencies since 1952 (and currently matches fellows to programs in 65 medical subspecialties),5 has at this point nearly 70 years of practical experience in identifying lines of questioning that are unhelpful and cause distress in applicants. Whether or not forensic fellowships decide to implement an algorithmic match through the NRMP, we should carefully consider adopting the communications guidelines it has developed, particularly regarding restrictions on persuasion,6 part of which is excerpted below.
6.2 Restrictions on Persuasion
One of the purposes of the Specialties Matching Service is to allow both applicants and programs to make selection decisions on a uniform schedule and without coercion or undue or unwarranted pressure. All participants in the Match shall respect the right of applicants to freely investigate program options prior to submission of a final rank order list. Both applicants and programs may express their interest in each other; however, they shall not solicit verbal or written statements implying a commitment. Applicants shall at all times be free to keep confidential all information pertaining to interviews, their ranking preferences, and the names or identities of programs to which they have or may apply. (Ref. 6, Section 6.2)
Helpfully, the NRMP goes on to explicitly list certain kinds of identifying information that programs are prohibited from soliciting from applicants:
It is a breach of this Agreement for … a program to request applicants to reveal the names, specialties, geographic locations, or other identifying information about programs to which they have or may apply … (Ref. 6, Section 6.2)
The Advent of the Ghost Offer
By specifying the earliest dates that interviews could be held (April 1) and offers accepted or rejected (June 1), the rules of the “common application process” were designed to create a two-month period of time during which applicants could evaluate programs without the pressure of exploding offers. Although adherence to the process was voluntary, and no explicit enforcement mechanism was implemented, I have little doubt that the vast majority of program directors followed the letter of the law and did not make explicit exploding offers. As a group of highly qualified forensic experts, forensic program directors are less likely to engage in explicit violations of rules, however nonbinding.
Still, the loophole that applicants could accept offers earlier than June 1, as long as they were not “pressured” to make their decisions, has turned out to be problematic in practice. In lieu of the exploding offer, the ghost offer has come into being. Rather than the explicit deadlines of exploding offers (“you have a week to decide or the offer will be withdrawn”), ghost offers are implied to be available but can disappear at any time. There are many variations with varying levels of subtlety, but all rest on the proposition that “a spot may be available to you now” but “might be gone if you wait too long.”
Although some program directors are rigorously following the spirit of the agreement by making it clear that any offers made before June will still be available come June 1, it is clear that something has gone awry. Between April and June 2020, I received many frantic emails and phone calls from applicants who perceived that they had been made an (at least tentative) offer and felt pressured to decide quickly. We are essentially right back where we started, if not worse. In fact, ghost offers are arguably more stressful to applicants than exploding offers, which at least are concrete and specify a deadline. Ghost offers, on the other hand, are more like faint apparitions that can disappear at any time.
The failure of the current system is not about program directors being bad people. It is about the fragile nature of voluntary agreements during difficult times. The math is simple: if each program director has a 95 percent chance of behaving ethically over the course of the applications cycle, and there are 48 programs, then the probability that all 48 directors behave ethically in any given year is only 0.95 raised to the 48th power, or roughly 8.5 percent. A single program director acting less than fully ethically is enough to kickstart a paranoid feedback loop that devolves into chaos: “If program X isn't playing by the rules, I don't see why I need to keep playing by the rules, especially if it's going to hurt me.”
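To make the arithmetic concrete, here is a minimal sketch in Python, assuming (purely for illustration) that the 95 percent per-director figure is right and that each director's behavior is independent of the others:

```python
# Illustrative arithmetic only: the 95 percent figure and the independence
# assumption are hypothetical, not empirical estimates.
p_ethical_individual = 0.95   # assumed chance one director behaves ethically all cycle
n_programs = 48               # number of ACGME-accredited forensic programs cited above

p_all_ethical = p_ethical_individual ** n_programs
print(f"P(all {n_programs} directors behave ethically) = {p_all_ethical:.3f}")
# -> P(all 48 directors behave ethically) = 0.085  (roughly an 8.5% chance)
```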
But note that system failure does not even require any actual unethical behavior; all that is required is the perception that others are behaving unethically, a perception that is encouraged to flourish in the context of desperation and lack of transparency. If people perceive that others are not playing fairly, then the impetus for any individual to behave altruistically is diminished, and we descend into a version of “the tragedy of the commons,” the classic game-theory scenario in which selfish behavior by individuals in a group leads to the group's eventual demise.7
No Exceptions
There is a simple solution: tighten the rules so that applicants cannot accept offers before June 1, no exceptions. This would create a truly safe time period during which applicants may receive praise, adulation, and welcoming winks and nods from interested programs but are freed from pressure to reply, either affirmatively or negatively. A small number of applicants may resent such “protection” because it would limit their ability to accept an offer early (they would have to wait until June 1 to do so), but their needs can be met with a formal early admissions process, described below.
In addition, to further minimize undue pressure during the safe time period, we should consider adopting restrictions on postinterview communications, as specified in the NRMP Match Communication Code of Conduct8:
Discouraging unnecessary post-interview communication. Program directors shall not solicit or require post-interview communication from applicants, nor shall program directors engage in post-interview communication that is disingenuous for the purpose of influencing applicants' ranking preferences.
Some subspecialties, such as orthopedic fellowship programs, have adopted a stricter rule, banning all postinterview communications,9 based on research reporting high rates of applicants' distress over having to navigate such communications.10 Some have proposed a middle approach that would allow certain forms of appropriate postinterview communication, limited to objective questions about training programs, with questions being answered by a designated point person in the program.11 As a field, we must start engaging in discussions regarding what we wish our communications rules to be.
Unraveling Transaction Times
Making a clear rule that offers cannot be accepted before June 1 would eliminate the problems associated with early offers, but it predictably creates another problem, which economists12 have called the “unraveling of transaction times.” Although the current system allowing offers before June 1 causes distress, one virtue is that offers and acceptances are spread out over time. Specifying a single start date for accepting offers is likely to cause a rush of negotiations beginning on that date.
Picture the scenario: program directors, in their desire to woo the most desired applicants, will be prepared to make offers for their positions as soon as possible. The most competitive applicants will be inundated with offers at the very start. Because program directors cannot ethically make more offers than there are available positions, they will need to wait to hear of rejections before making further offers; thus, their incentive will be to provide offer recipients as little time as possible to make their decisions, preferably immediately. On the other hand, recipients of offers, if not offered their top desired program right away, will want to delay rejecting offers in hand in the hopes of receiving a more desirable offer later.
Specifying a single start date for accepting offers does create a safe time period for applicants to explore programs without the pressure of having to decide too soon. It does not, however, solve the problems engendered by time-limited offers; it merely compresses the making and accepting of such offers into a frenzied period after the start date. There is a better way, proven in concept and refined by decades of experience. In fact, the situation we are facing as a psychiatric subspecialty is identical to the conditions that led to the creation of the NRMP in 1952.
A Brief History Lesson of the NRMP Match
At this point, a brief history lesson may be useful. To read early accounts of the situation faced by residency directors in the market for medical interns in the early 20th century is to be struck with an eerie sense of déjà vu. Due in part to the rapid growth of hospitals, by 1951 the number of internship positions (10,000) far outstripped the number of applicants (6,000).13 Faced with an increasingly competitive market for interns, hospitals began to make offers to students as early as the second year of medical school, to force applicants to make decisions before hearing from other hospitals. As Dr. Turner, the director of Mount Sinai Hospital, lamented at the time in a paper titled “Intern selection: wanted, an orderly plan”:
Twenty-five and more years ago, the selection of internes by most hospitals took place in the last half and even the last quarter of the senior year. That selection has now been advanced on the school calendar to the beginning of the junior year and, indeed, inquiries now come to me even from sophomores. The dates of examinations and selection have been pushed farther and farther back, through the efforts of some hospitals to get ahead of others in the choice of candidates, for hospitals can exercise pressure on the selected candidates by requiring acceptance of offers of internship at once or within a short time. The student's dilemma is understandable; if the first offer of this kind comes from a hospital of his second or third choice, he loses out entirely if he declines and is not selected later by the hospital of his first choice. (Ref. 14, p 27)
Efforts by Dr. Turner and others to reform the system persuaded the Association of American Medical Colleges (AAMC) to impose some order on the timelines for hiring interns. At its annual meeting in 1945, the AAMC agreed to adopt “the cooperative plan,” which prohibited the release of students' academic records until the end of the third year of medical school.15 This established a firm boundary for the earliest date that internship applicants could be considered. In addition, the AAMC “requested” that hospitals give applicants a 10-day interval to consider any offers made.15
The plan worked at first but rapidly unraveled. Without access to the students' academic records, hospitals were indeed largely forced to delay consideration of internship applications (and subsequent offers) until the applicant's fourth year of medical school. On the other hand, the “request” by AAMC to give applicants a 10-day window to consider offers rapidly deteriorated in the context of competition and lack of countervailing regulatory pressure:
In 1945, offers were to remain open for 10 days. By 1949, a deadline of 12 hours was rejected as too long. Hospitals were finding that if an offer was rejected after even a brief period of consideration, it was often too late to reach their next most preferred candidates before they had accepted other offers. Hospitals thus often pressured students to reply immediately; offers conveyed by telegram were often followed by telephone calls requesting an immediate reply (Ref. 12, p 910).
Over a span of four years, the time window given to applicants to consider offers compressed from ten days to none.
F.J. Mullin, Dean of Students at the University of Chicago School of Medicine, summarized the problems he saw in the years after “the cooperative plan” was implemented:
… on average the students in their graduating classes receive 3 or even 4 offers per student for internships from the hospitals to which they apply. This means that the hospital must wait until these students make a choice, and, in turn, must keep other students in suspense as to whether a place will be available for them … Many hospitals have resorted to phoning the students directly and putting pressure on students to make immediate decisions over the phone … Students sometimes get panicky and accept poor internships way down on their lists because they have not heard from a higher position on their order of preference … Students have resented pressure for immediate decisions put on them by phone communication from hospitals. Some hospitals have felt that other hospitals have violated the principles of the Cooperative Plan and have notified students early or have put undue pressure on students for immediate decisions (Ref. 16, p 438).
The parallels to our current situation are too obvious to belabor. The adoption of standardized timelines did succeed in curtailing the creep of earlier and earlier offers, but did nothing to alleviate the pressure felt by recipients once such offers were made. Exploding offers still prevailed, but the bombs came with shorter and shorter fuses.
“The Match”
“The matching plan” was proposed by Dean Mullin in 1951 to remedy the deteriorating situation.13 Instead of a drawn-out “analog” matching process characterized by making and accepting offers one at a time, an algorithm would be introduced that could match all applicants and all programs in an automated and mathematically optimized manner. As described by Dean Mullin above, an analog matching process led to suboptimal outcomes for applicants because offers arrived in a piecemeal fashion, each demanding a hasty decision before all offers could be considered. Hospitals also suffered because, in that system, they often missed hiring suitable candidates while waiting to hear rejections from the candidates to whom they had already made offers.
Mullin proposed replacing this painful, inefficient process with a “central clearing agency” that would act as a “mechanical facilitator” using a matching algorithm. Applicants and hospitals would be free to evaluate each other, but offers would not be made or accepted during the evaluation period. At a specified date, each would submit a confidential rank-ordered list containing all their preferences, and an algorithm would match them in a mathematically optimized fashion based on these lists. Mullin and Stalnaker summarized the anticipated benefits:
It benefits applicants and hospitals by giving full recognition both to the student preference and to the hospital's evaluation of its applicants. It prevents unfair pressure forcing students into early commitments, often to their detriment. Under the plan the student will not be required to make a decision on the basis of a telephone call or within a very limited period of time. A last minute scramble, with its many uncertainties, is eliminated. No student, under the plan, will receive telegraphic offers by a number of hospitals and wonder if he will receive other offers later. Hospitals will not send out telegraphic offers to many students only to receive no replies or negative ones, thus requiring them to send out additional offers at a later time to students who may, in the meantime, have taken another internship although they preferred the hospital involved (Ref. 13, p 341).
Based on the efforts of Dean Mullin and colleagues, the “National Interassociation Committee on Internships” was created with representatives from medical student associations, the AAMC, and major hospital associations. The committee approved a plan to implement the proposed algorithmic match for internships for the 1951–1952 year13 via the creation of a centralized clearinghouse that eventually became known as the NRMP. The plan enjoyed very high rates of adoption from the very start (over 98% of hospitals and 97% of eligible students participated in the first year) and was deemed a success.17 The NRMP continues to be used to the current day for the main residency match, encompassing over 44,000 applicants for 37,000 positions, and has expanded to serve more than 65 subspecialties,5 with overall high rates of satisfaction.9,17,18
The details of the match algorithm used by the NRMP are extensively described elsewhere19 and are outside the scope of this article. It is hard not to admire the algorithm after getting to know it. The current NRMP algorithm is based on Gale and Shapley's work on the theoretical underpinnings of “two-sided matching markets,” of which applicant–program matching is an example. Their work led to the development of the Gale-Shapley algorithm,20 which was proved to generate optimal solutions called “stable matchings.” This property is the basis for the NRMP Match's success and longevity.
The Beauty of Stable Matching
Two-sided matching markets are systems in which members from two sides try to find each other through a selection process (a “match”). Some examples are college admissions (colleges and students), residency selection (programs and residents), business hiring (employers and workers), and marriage.
Algorithms such as the Gale-Shapley algorithm produce stable matchings, meaning that no applicant and program left unmatched with each other would both prefer each other over the match they received, with both sides' ranking preferences taken into account. Matching mechanisms that generate unstable matches are unlikely to survive, because some pairs could have done better by finding each other outside the system. Indeed, matching algorithms have been applied in multiple markets (U.S. residencies, British residencies, sororities, etc.) and have been found to survive only if they produced stable matchings.21
The beauty of the NRMP Match algorithm is that it works best when everyone behaves selfishly. All applicants have to do is submit a ranked list of their true preferences for programs; all programs have to do is submit a ranked list of their true preferences for applicants. Strategically, there is essentially no incentive to do anything else: truth-telling is provably the best strategy for applicants, and opportunities for programs to gain by misreporting their preferences are negligible in practice. There is no need for applicants to guess what the programs think of them. There is no need for programs to figure out how highly applicants rank them. Total honesty becomes the best strategy for all involved.
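For readers curious about the mechanics, the sketch below illustrates the applicant-proposing “deferred acceptance” procedure at the core of this family of algorithms. It is a minimal Python illustration, not the NRMP's actual implementation (which additionally handles couples, partial rank lists, and other complications); the applicants, programs, capacities, and preference lists are hypothetical.

```python
# A minimal sketch of applicant-proposing deferred acceptance (Gale-Shapley).
# All names and preferences below are hypothetical illustrations.

def deferred_acceptance(applicant_prefs, program_prefs, capacities):
    """Return a stable matching {applicant: program}.

    applicant_prefs: applicant -> list of programs, most preferred first
    program_prefs:   program   -> list of applicants, most preferred first
    capacities:      program   -> number of positions
    """
    # Rank tables so programs can compare applicants quickly.
    rank = {p: {a: i for i, a in enumerate(prefs)} for p, prefs in program_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}   # next program each applicant will propose to
    tentative = {p: [] for p in program_prefs}      # applicants tentatively held by each program
    free = list(applicant_prefs)                    # applicants not yet tentatively matched

    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                                # list exhausted; applicant stays unmatched
        p = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        if a not in rank[p]:
            free.append(a)                          # program did not rank this applicant
            continue
        tentative[p].append(a)
        tentative[p].sort(key=lambda x: rank[p][x])  # keep the best-ranked applicants
        if len(tentative[p]) > capacities[p]:
            free.append(tentative[p].pop())         # displace the least preferred

    return {a: p for p, held in tentative.items() for a in held}

# Hypothetical toy example: three applicants, two one-slot programs.
applicants = {"A": ["P1", "P2"], "B": ["P1", "P2"], "C": ["P2", "P1"]}
programs = {"P1": ["B", "A", "C"], "P2": ["A", "C", "B"]}
print(deferred_acceptance(applicants, programs, {"P1": 1, "P2": 1}))
# -> {'B': 'P1', 'A': 'P2'}; C is left unmatched in this toy example
```

However programs court applicants before rank lists are due, the outcome depends only on the submitted lists, which is why truthful ranking is the sensible strategy for both sides.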
This means that as program directors, we can focus on ascertaining how much we like the applicant rather than how much they like us, and vice-versa. Program directors will still want to showcase their strengths to attract applicants, but we will no longer have any reason to ask questions aimed at sizing up the competition to know how soon we should be signaling offers. We will have no reason to desire displays of loyalty; such displays mean little if applicants proceed with their optimum strategy (to honestly list their preferences) and let the algorithm handle the rest.
The NRMP Match has its problems, but its problems are well known and tractable. One well-known problem area involves programs trying to coerce applicants into ranking them more highly on the applicants' rank order lists.10 Obviously, there are acceptable ways for programs to influence how highly applicants rank them, such as putting their best foot forward and effectively communicating the program's worth to the applicant. Unacceptable ways involve playing on fears of uncertainty (“you'll match here if you rank us highly”) or psychologically hooking applicants by inviting displays of loyalty (“show us how much you like us”). Although these are violations of Match communications guidelines and should not be allowed, the real solution lies in educating applicants to ignore such attempts at manipulation, as their best strategy is always to submit their true preferences in rank order.
The Way Forward
Our current system inflicts unnecessary suffering and generates poor outcomes due to bad design. We must establish rules and procedures that maximize the common good. Those who say they are “against the match” should be aware that we currently do a match: one that is carried out person-to-person, over time, without adequate guidance on timelines and communications, and without monitoring or enforcement procedures.
The NRMP match is not just an algorithm, it is a collection of interlocking parts that collectively form a matching system. If we choose not to enlist the full services of the NRMP match, then we should be prepared to develop an alternative system. These are the components we must consider:
Timelines related to the earliest dates for applications, interviews, and offer acceptances;
The way that offers should be made;
The establishment of communications guidelines;
The identification of a process to monitor whether rules are being followed; and
The appropriate punishment for rule-breakers.
We should consider carefully what we wish our timelines to be. Later dates would allow more residents to be exposed to forensic training by the time they would need to apply and would give program directors more information to evaluate applicants. In addition to considering the opinions and needs of programs, we should also survey potential applicants and recent fellows regarding which timelines they would prefer.
Once clear guidance for dates is established, we must next decide how offers are made. In the current “analog” process, offers are made person to person, one match-pair at a time, and with variable amounts of time given to the recipient to decide. We know where this unhappy road leads. Alternatively, we have the NRMP Match algorithm, originally based on the Gale-Shapley algorithm, then adapted to the requirements of the residency match by the economist Alvin Roth, who shared the 2012 Nobel Prize in Economics with Shapley for this work.
There is no question as to which match mechanism (individual deal-making versus algorithm) is superior. We should implement an algorithmic match mechanism in our system; its great alchemy is that it transforms individually selfish behaviors into the best path for the group as a whole. We could hire the NRMP to do this for us, as it does for 65 other subspecialties. Doing so would also bind us to the NRMP's code of conduct for communications and their policies for monitoring, investigating, and enforcing possible violations, which have real teeth; the most serious violators can be permanently banned from the Match. Exceptions procedures have also been developed to consider requests for waivers and to dispute findings of the review panel.
At the other extreme, some brave subspecialties, such as urology, have hired their own computer experts to administer all components of their own match.22 Obviously, this option is not for the faint of heart. As a middle option, less comprehensive alternatives to the NRMP match exist, such as the SF Match. The SF Match began as an in-house service developed in 1977 by the American Academy of Ophthalmology to run its residency match9 and has since expanded to provide matching services to other residencies (e.g., plastic surgery) and 22 medical fellowships.23 Unlike the NRMP, the SF Match administers the match algorithm but leaves communications guidelines and monitoring and enforcement procedures in the hands of the programs.
The SF Match also integrates, as part of its service, an online centralized application service, similar to the Electronic Residency Application Service, that enables fellowship applicants to upload their academic records once to an online portal, which then handles distributing materials to programs. The current common application process is a good start, but centralizing all aspects of an applicant's file (application, transcripts, board scores, letters of recommendation, etc.) electronically would save considerable time and money for applicants and ease the burdens of fellowship administrators.
If we do not hire the services of the NRMP, we will need to consider and establish our own communications guidelines. We should examine the NRMP's Code of Professionalism, Code of Conduct for Communications, and Restrictions on Persuasion as a starting point for our discussions. Other fellowship programs have recently gone through transitioning to a formal match process, and we can learn from them.9 Guidelines should be as clear as possible, as this will help their promulgation and adherence.
In this scenario, we would also have to handle enforcement and punishment ourselves. Although there may be some disquiet in subjecting ourselves to the enforcement policies of an outside agency, the fact is that the NRMP has been doing this for a long time and has well-developed policies and procedures. The alternative is to find people in our society who are willing to do these thankless tasks on an ongoing basis, probably as volunteers. It is unlikely that we could do a better job, or would want to try.
An Early Decisions Process
As I have discussed, implementing a match process with clearly demarcated time points and no early exceptions may disadvantage a small group of people: those applicant–program pairs that “know early” that they would like to be together. I propose that a formal NRMP-like match process be combined with a formal early admissions process modeled on the “early decision” program used by elite colleges.24 In this process, applicants who decide early in their search on a definite favorite choice can apply to one college by a predetermined “early decision application” date, with the understanding that offers are binding. Similarly, forensic applicants who have decided early on a favorite program could be allowed to apply to one program, and if accepted they would be removed from a subsequent match. Programs would have to decide on an upper limit to the number of positions filled through an early decision process, to ensure that enough spaces remain to incentivize applicants to participate in a main match later in the year.
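As a rough illustration of how such a carve-out could interact with an algorithmic match, the sketch below (in the same hypothetical Python style as the earlier example) removes binding early-decision acceptances from the pool and reduces program capacities before the main match is run. The function names and the per-program early-decision cap are assumptions for illustration, not a specific proposal.

```python
# Hypothetical sketch: combine a binding early-decision round with a main match.
# early_acceptances maps applicant -> program for early offers a program has
# chosen to extend and the applicant has accepted; early_cap is an assumed
# per-program limit on positions that may be filled early.

def carve_out_early_decisions(applicant_prefs, capacities, early_acceptances, early_cap=1):
    matched_early = {}
    filled = {p: 0 for p in capacities}
    for applicant, program in early_acceptances.items():
        if filled[program] < min(early_cap, capacities[program]):
            matched_early[applicant] = program      # binding: applicant exits the main match
            filled[program] += 1
    remaining_prefs = {a: prefs for a, prefs in applicant_prefs.items()
                       if a not in matched_early}
    remaining_caps = {p: capacities[p] - filled[p] for p in capacities}
    return matched_early, remaining_prefs, remaining_caps

# The main match (e.g., the deferred_acceptance sketch above) would then run
# only over remaining_prefs and remaining_caps.
```

The cap on early slots is what keeps enough positions in the main match to make participation in it worthwhile, as noted above.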
Unintended Consequences
One of the concerns I have heard from program directors regarding “the Match” is that it may lead to a rise in the number of applications per applicant. To maximize the chances of matching, applicants will want to evaluate many programs so that they have enough desirable programs to rank on their lists. Indeed, some programs have reported rises in the number of applications per applicant after adopting a formal match process.9,18
Although I can sympathize with programs that will need to marshal more resources to handle an increase in applications, it seems hard to argue that having more applicants looking at your program is a bad thing overall. In any case, it may be a moot point; from my conversations with ADFPF members, application numbers appear to be going up anyway. Part of the reason may be that adoption of the “common application process” has made it simpler for applicants to apply to multiple programs, but I think the main reason is that the pandemic has forced programs to rely more on video interviews. Applicants are no longer faced with the disincentive of expensive and time-consuming travel and are free to explore geographically. Although the pandemic appears to be nearing its end, my program is not considering discontinuing the valuable option of video interviews anytime soon.
Growing the Pool of Applicants
The hard truth is that, in the short term, there are far more forensic fellowship positions than there are applicants. Programs that do not reliably fill may have to reduce capacity or close. This is the case whether we continue with the current process or transition to a formal match; neither process changes the basic math. In the longer term, the winning path is to increase the number of applicants in the pool. Forensic programs have two great virtues going for them, which should help increase the number of applicants over time: the number of psychiatry residents is growing robustly,25 and forensic psychiatry holds a natural fascination for the public.
Given increasing numbers of psychiatry residents, we should be able to increase the numbers applying to forensic fellowship programs. Simplifying the application process, adopting a centralized application service, and putting into place clear guidelines that minimize distress can only help increase the number of applicants.26,27 In addition, programs continuing to offer video interviews would help level the playing field economically for applicants and allow for more diverse exploration, ultimately to everyone's benefit.
One reason why psychiatry fellowship numbers are declining despite increasing numbers of psychiatry residents may be economic: given residents' levels of indebtedness, adding an extra year of training is a significant financial burden.1 I believe the answer here is to join with other psychiatric subspecialties in calling for changes in psychiatry residency to allow for an early start. The ACGME allows the requirements of general psychiatry to be completed in 3 years.28 Just as child psychiatry has secured a process for residents to start fellowship training in the PGY-4 year, forensic psychiatry should do the same. Although this issue was considered by AAPL in 2016 and rejected,2 I believe the time has come to re-evaluate that decision. The need for more trained forensic psychiatrists has not abated, and fill rates continue to be problematic.
Faculty in training programs should be proactive in reaching out to their residency program and medical school to offer topics of forensic interest that inspire natural curiosity: insanity, drug addiction, malpractice, psychopathy, neurolaw, memory, bias, and biological bases for behavior. Faculty at institutions associated with undergraduate universities might want to consider designing and teaching an undergraduate class. Those interested in public speaking should reach out to local high schools and science museums to offer talks. I have participated in all these activities and have been gratified to find my interests reciprocated with very receptive crowds. Over time, these activities generate public interest, which is an important contributor to the applicant pool.
AAPL as an organization should be doing more to increase its appeal to potential applicants. Pushing application timelines to after AAPL's meeting in October would make the annual meeting more relevant for residents to attend. With the advent of new video offerings such as AAPL's virtual “Ask the Experts” talks and online courses and panels, we have ready-to-go offerings that can attract and stoke interest in our field. We should consider making AAPL memberships free for all trainees, or at least offering online educational trainings for free in exchange for a registration that captures their contact information. This will help AAPL create a database of trainees interested in forensics, who can then receive targeted communications.
Applications Process Oversight Committee
Clearly, fellowship directors will want to have a voice in deciding what application process will be used, but I believe strongly that we should not be the only voice at the table. This is based on my general skepticism regarding the ability of any organization to self-regulate. Instead, we must have a broad range of voices and perspectives. We can look to the development of the NRMP match as a guide; it was developed between 1950 and 1952 by a coalition of students, hospitals, and medical schools called the “National Interassociation Committee on Internships.”17
I propose that AAPL form an analogous committee, called the Applications Process Oversight Committee, containing representatives from the ADFPF, AAPL leadership, and, crucially, residents and fellows. Forensic program directors and trainees are the main participants in the match, so they should have a direct say in what happens to them. The role of the AAPL leadership representatives would be to act as neutral sources of wisdom guiding the process.
The involvement of trainees in the decision-making process is critical, and their voices have been missing from discussions that have, up to this point, involved only ADFPF members. Trainees should take heart from the fact that student activism shaped the NRMP match from the very start. Critical refinements to the match algorithm over time have been driven by sharp-eyed medical students who detected biases that, in rare cases, favored programs over applicants, and who proposed algorithmic remedies.12 With the most recent refinement of the algorithm to a so-called “applicant-proposing” mechanism,29 the match optimizes pairings for both programs and applicants, with ties broken in favor of applicants. This would not have happened had students remained silent.
Finally, the Applications Process Oversight Committee should work quickly so that changes can be implemented by the next applicant cycle, which by default would begin around January 2022 for fellows starting July 2023. Otherwise, our nonaction will perpetuate a system that inflicts needless suffering on all involved, and for no good reason.
Footnotes
Disclosures of financial or other potential conflicts of interest: None.