Duolingo will “gradually stop using contractors to do work that AI can handle,” according to an all-hands email sent by cofounder and CEO Luis von Ahn announcing that the company will be “AI-first.” The email was posted on Duolingo’s LinkedIn account.
According to von Ahn, being “AI-first” means the company will “need to rethink much of how we work” and that “making minor tweaks to systems designed for humans won’t get us there.” As part of the shift, the company will roll out “a few constructive constraints,” including changes to how it works with contractors, consideration of AI use in hiring and in performance reviews, and a rule that “headcount will only be given if a team cannot automate more of their work.”
Von Ahn says that “Duolingo will remain a company that cares deeply about its employees” and that “this isn’t about replacing Duos with AI.” Instead, he says the changes are “about removing bottlenecks” so that employees can “focus on creative work and real problems, not repetitive tasks.”
One of the first things drilled into me in journalism was that “Smith thinks” should be recast as “Smith said he thinks.”
The C-suite is likely well aware of AI’s limitations, but shareholders like to hear about the hot new thing.
The thing is, the idea isn’t wrong. Automating complex tasks is a bitch, but the repetitive tasks that turn any job into a grind are prime candidates. The larger issue is that instead of letting employees spend the time gained from increased efficiency on more fulfilling work, companies tend to do layoffs.
The problem is, this varies from person to person. My team divvied up tasks (or did, before I quit not too long ago) based on what different people enjoy doing more, and no executive would have any clue which recurring tasks are repetitive in the soul-crushing sense and which ones are just us doing our job. I like doing network traffic analysis. My coworker likes container hardening. Both of those could be automated, but that would remove something each of us enjoys from our respective jobs.
A big theme in recent AI company rhetoric is that AI will “do the analysis” and people will “make the decisions,” but how on earth are you going to maintain the technical understanding needed to make a decision without doing the analysis yourself?
An AI saying, “I think this is malicious, what do you want to do?” isn’t a real decision if the person answering can’t verify or refute the analysis.
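To make that concrete, here’s a toy sketch of the two workflows (the `Alert` class, field names, and verdicts are all invented for illustration, not any real tool’s API): the first function is the “decision” the rhetoric describes, a yes/no on an opaque verdict, while the second at least exposes the artifacts the model relied on, which is the minimum an analyst needs to verify or push back.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source_ip: str
    dest_ip: str
    verdict: str                                   # what the model concluded
    confidence: float                              # how sure it claims to be
    evidence: dict = field(default_factory=dict)   # raw artifacts behind the verdict

def rubber_stamp(alert: Alert) -> str:
    # The "decision" the rhetoric describes: approve or reject an opaque conclusion.
    # In practice a human clicks a button; here we fake it with a threshold.
    print(f"Model says: {alert.verdict} ({alert.confidence:.0%} confident). Approve block?")
    return "block" if alert.confidence > 0.8 else "ignore"

def reviewable(alert: Alert) -> str:
    # A decision the analyst can actually verify or dispute, because the
    # underlying artifacts are exposed, not just the verdict.
    print(f"Model says: {alert.verdict} ({alert.confidence:.0%} confident)")
    for name, artifact in alert.evidence.items():
        print(f"  {name}: {artifact}")
    # ...analyst reads the flows/payloads and can disagree with the model here...
    return "block"

alert = Alert(
    source_ip="10.0.0.7",
    dest_ip="203.0.113.9",
    verdict="malicious beaconing",
    confidence=0.91,
    evidence={"flows": "587 connections at ~60s intervals", "bytes_out": "4.2 MB"},
)
print(rubber_stamp(alert))
print(reviewable(alert))
```

The point isn’t the code; it’s that the second interface only works if someone on the team still knows how to read those artifacts, which is exactly the skill that atrophies when the analysis itself is automated away.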