ChatGPT and other generative AI tools are being widely used in higher-education admissions. But now, students who've turned to AI for writing are turning back to people to make that work sound more human, and schools can't possibly keep up.
“Tapestry.” “Beacon.” “Comprehensive curriculum.” “Esteemed faculty.” “Vibrant academic community.”
They’re among the laundry list of colorful words, flowery phrases and stale syntax likely to tip off admissions committees to applicants who’ve used AI to help write their college or graduate school essays this year, according to essay consultants whom students are hiring en masse to un-ChatGPT their submissions and add a “human touch.”
“Tapestry” in particular is a major red flag in this year’s pool, several essay consultants on the platform Fiverr told Forbes. Mike, an Ivy League alum and former editor-in-chief of the Cornell Business Journal who now edits hundreds of grad school applications each cycle through Capitol Editors, said it’s appeared repeatedly in drafts from at least 20 of his clients in recent months. (He requested anonymity to protect their privacy.)
“I no longer believe there’s a way to innocently use the word ‘tapestry’ in an essay; if the word ‘tapestry’ appears, it was generated by ChatGPT,” he told Forbes. Though many such words, on their own, could have come from a human, when a trained eye sees them used over and over again in the same cadence across multiple essays, “it’s just a real telltale sign.”
“There will be a reckoning,” Mike added. “There are going to be a ton of students who unwittingly use the word ‘tapestry’ or other words in their essay that [may] not be admitted this cycle.”
This winter and spring mark the first full admissions season since the explosion and broad adoption of ChatGPT and other generative AI, and in the education world, the fast-growing use cases for the technology have far outpaced school policies. Some educators are trying to fend off the tech with “zero tolerance” policies, while others are actively embracing it in their classrooms (Arizona State University, for example, is partnering with OpenAI to use ChatGPT Enterprise for tutoring and coursework). But short of cohesive, consistent rules for how AI can be used in the application process, if at all—and without tools that can reliably detect whether it has been—many students have turned to OpenAI’s ChatGPT and its rivals for help. The uptake has given rise to a cottage industry of freelance consultants who specialize in plucking out suspicious AI jargon and making essays sound authentic.
Forbes asked more than 20 public and private colleges of varied sizes across the United States, from Arizona State and Georgia Tech to Princeton and Harvard, about trends in the use of AI and ChatGPT in applications and how they are handling candidates likely to have relied on it. Many schools declined to comment or did not respond; Emory spokesperson Sylvia Carson said “it’s too soon for our admissions folks to offer any clear observations.”
But editing consultants told Forbes that if they were able to spot suspected AI use after reading just dozens or hundreds of essays, admissions committees reviewing many multiples more could have an even easier time picking up on these patterns. Ben Toll, the dean of undergraduate admissions at George Washington University, explained just how easy it is for admissions officers to sniff out AI-written applications.
“When you’ve read thousands of essays over the years, AI-influenced essays stick out,” Toll told Forbes. “They may not raise flags to the casual reader, but from the standpoint of an admissions application review, they are often ineffective and a missed opportunity by the student.”
In fact, GWU’s admissions staff trained this year on sample essays that included one penned with the assistance of ChatGPT, Toll said—and it took less than a minute for a committee member to spot it.
The words were “thin, hollow, and flat,” he said. “While the essay filled the page and responded to the prompt, it didn’t give the admissions team any information to help move the application towards an admit decision.” (Disclosure: One of the article’s authors, Alexandra, has occasionally shared feedback with college applicants on their essays.)
A demand for “human vibes”
In December 2023, an unusual request popped up on Indiana Pejic’s Fiverr account: “Could you edit and proofread my personal statement, 1,000 words? I partly wrote it with ChatGPT and would appreciate it if you could make it sound less robotic.” The request was from a Korean student applying for a PhD in Renaissance Literature at the University of Cambridge.
Pejic—a freelancer in Serbia who has three master’s degrees and one PhD—has edited roughly 100 personal statements and college essays through Fiverr. Though she initially feared her business would take a hit with the onset of ChatGPT, the reality has been quite the opposite: the AI boom has fueled unprecedented demand for editors adept at making computer-generated writing sound like the real thing. “ChatGPT is extremely wordy,” she said, and “there are many abstract words that don’t really connect well.” Today, she charges up to $100 to add “human vibes” to AI-written essays. (Others offering similar services through Fiverr charge anywhere from $10 to $150 depending on the length.)
Lara Cantos, an editor and translator in Mexico, said she has seen so many repetitive groups of words and sentence constructions that she’s made a glossary of terms that set off AI alarms. “It’s specific terminology and sentence structure that starts repeating itself, and the use of the same adjectives again and again,” she said. (“Tapestry,” she noted, also makes recurring appearances.)
Mike, from Capitol Editors, said he, too, has kept a running list. It includes describing something that “aligns seamlessly with my aspirations” or “stems from a deep-seated passion,” or referencing one’s “leadership prowess,” “entrepreneurial/educational journey” and “commitment to continuous improvement and innovation.” (One structure that’s a giveaway, he added, is “not merely X, but Y,” or “not just X, but Y.”)
All the consultants Forbes spoke to said that AI tools are fairly popular among international students applying for admission abroad who turn to the likes of ChatGPT because English isn’t their first language. In some cases, they’ll feed the AI ideas or prompts in their native language to get started on a first draft; in others, they’ll rely more heavily on the tech. “I just received an order from a student in Switzerland applying for the U.S. with a ChatGPT-generated essay,” Cantos told Forbes. “Some of the AI-written content sounds like literal translations.”
A higher-ed Wild West
While AI tools that can write and refine large blocks of text continue to develop at breakneck speed, becoming both better at writing and harder to detect, schools are struggling to keep up and reach consensus on how to address them in applications. Students, in the meantime, are both seeking clearer policies and pushing the limits on what they can and cannot do with these tools.
“I don’t think there is anything even approaching a consistent approach to the problem,” said Chris Reed, executive director of admissions at Texas A&M.
Each admissions office has different rules on AI use in applications—ranging from the University of Southern California’s stringent “zero-tolerance” policy, to Georgia Tech’s position that students are allowed to use AI as a “helpful collaborator,” to Texas A&M’s stance that applicants can submit AI-generated content because the university has moved away from using the standard college essay as a writing sample. (In fact, one of the prompts for Texas A&M’s honors program is to use an AI platform like ChatGPT or Google’s Bard to write an essay, and then critique how well it was able to present an argument.)
“So much of it just depends on the individual institution and how they’re using essays in their process,” said Reed from Texas A&M, noting that some 30% of the admissions essays they now receive use AI in some capacity, according to the AI detection tool GPTZero.
Other universities are less flexible. At the University of California, which has 10 campuses across the state, prospective students may use AI for brainstorming and creating an outline but the final draft must be “independently written”—and answers that include an “unedited AI response” (such as a direct copy-paste from ChatGPT) could be flagged as plagiarism, spokesperson Rachel Zaentz told Forbes in an email. “It would be more work for [applicants] to try building a strong ChatGPT prompt than it would be to develop their own original responses to the questions,” she added.
The Common Application, used by more than a million students each year to apply to over 1,000 academic institutions, also considers “the substantive content or output of an artificial intelligence platform” a form of plagiarism under its fraud policy. Common App president Jenny Rickard told Forbes “we investigate all fraud allegations, and if they are substantiated, we take appropriate disciplinary steps.” She did not say how, exactly, possible instances of AI-related plagiarism are investigated, though she noted the Common App does not use AI detection tools. The consequences, however, can be severe, and the disciplinary actions include reporting the findings to all member colleges on a student’s “My Colleges” list.
Detection, an imperfect science
It’s no wonder Common App isn’t relying on tools that promise to weed out AI-generated content: such software is still notoriously inaccurate, so much so that OpenAI shuttered its own AI detection tool. Leaders in the space like GPTZero have changed course, now offering schools high-level analysis of how AI is being used in applications rather than pinpointing problematic essays. Other creators of detection software have warned against using the technology in admissions altogether.
Eric Mitchell, a PhD student at Stanford who built DetectGPT, called it “a total minefield,” cautioning that while research into detection software is worthwhile, the resulting tools are simply not ready for primetime in higher ed. College admissions is such a critical moment in a person’s life, he said in an interview, and “the cost of falsely condemning someone’s essay is just so high, [that] I’m not sure we can accept a 2% false positive rate.” It’s unclear how widely schools are using this software in admissions, but both Mitchell and GPTZero’s CEO Edward Tian said educators have expressed interest.
Like a majority of the schools Forbes contacted for this story, GWU’s dean of undergrad admissions did not say whether AI detection tools are part of its review process. But he did make a case for aspiring attendees to avoid ChatGPT and its peers altogether.
“By the time a student is filling out their application, most of the materials will have already been solidified. The applicants can’t change their grades. They can’t go back in time and change the activities they’ve been involved in,” Toll said. “But the essay is the one place they remain in control until the minute they press submit on the application. I want students to understand how much we value getting to know them through their writing and how tools like generative AI end up stripping their voice from their admission application.”