Illustration by Aaliyah Diaz
Have you ever blindly copy-pasted something from ChatGPT? (Don’t worry, I won’t tell your professor.) Maybe you’re a goody-two-shoes who hasn’t, but chances are, you have asked AI a question and accepted its answer without a second thought. Be honest.
Don’t click off just yet; this is not another lecture about how you should “stop letting AI rot your brain” or “never use AI again,” because the truth is, you can’t. Minute by minute, AI is becoming increasingly normalized in our society, making it nearly impossible for even the most anti-tech advocates to avoid it. Something that began as a harmless convenience in our everyday lives now demands reliance, a reliance that is shaping not just personal habits but policy decisions. The U.S. Department of Transportation’s decision to ease autonomous vehicle regulations reflects how AI’s growing normalization has created a premature belief in its infallibility, leading to deregulation that prioritizes innovation over safety.
The lane assist that nudges your steering wheel back into place? AI. The blind spot monitoring that flashes a warning when a car sneaks up beside you? Also AI. From automatic emergency braking to cruise control, AI is embedded in the vehicles we drive, often in ways we barely notice. When something is so seamlessly integrated into our lives, we stop questioning how it works, or whether it could be wrong. That is precisely the problem: we have grown so accustomed to AI being fast and convenient that we have started to assume it is also flawless. As we are about to see, this overreliance, and the illusion of perfection behind it, becomes a tangible issue when it starts shaping the laws meant to keep us safe.
On April 24, 2025, the National Highway Traffic Safety Administration (NHTSA) — a sub-agency of the U.S. Department of Transportation (DOT) — revealed its new Autonomous Vehicle (AV) Framework. DOT Secretary Sean Duffy stated that the updated framework intends to establish a national standard that “spurs innovation and prioritizes safety,” though I argue that it may spur innovation at the expense of safety.
One of the most striking components of the new framework is that it no longer requires automakers to report non-fatal crashes involving Level 2 AVs (which include any car equipped with features like lane assist, blind spot monitoring, or adaptive cruise control). Previously, manufacturers had to disclose when their AVs were involved in accidents, even those that didn’t result in death or injury, giving consumers vital insights into potential flaws.
Take a moment to think: if we stop tracking crashes, does that mean they’ve stopped happening? Of course not. It just means we won’t know about them. What we do know is that Tesla accounted for over 800 of the 1,040 AV crashes reported to the NHTSA last year, roughly three-quarters of the total, most involving Level 2 systems like Autopilot. Under the new rules, similar incidents won’t need to be reported unless they are fatal.
I know what you must be thinking. Why loosen safety regulations now, especially when historical data shows that AI systems are still flawed? The answer lies in a word that our society has come to idolize: innovation.
“This Administration understands that we’re in a race with China to out-innovate, and the stakes couldn’t be higher,” explains Duffy. In this race for technological innovation, policymakers are beginning to treat human safety as an afterthought. Deregulation is not happening because the AI is flawless; it is happening because acknowledging the flaws would mean slowing down. And the most alarming part? The more we blindly trust AI, without pausing to ask whether it’s right, the easier it becomes to justify stripping away oversight.
Now, say a company discovered its automatic emergency braking system was faulty, causing a few non-lethal crashes. Annoying, but not reportable under the new rules. Then imagine that same system fails again tomorrow, but this time, all passengers in the car die. Gruesome, I know. But who’s responsible? Not the manufacturer; they were not required to report the earlier, non-lethal crashes. Not the government; they chose to step back. So… the driver? The passengers? Who takes the fall when no one is held accountable?
AI will continue to evolve, and it should. But we must recognize that something new is not something perfect. The more we treat AI as infallible, the harder it becomes to determine who’s accountable when it fails. Deregulation, in this context, is like wrapping a flawed product in shiny paper and a bow: it creates the illusion of progress while hiding a lack of transparency and oversight.
Are we okay with trading transparency for convenience? Should we be willing to compromise human safety for innovation? And is taking the time to get something right really such a bad thing? These are not rhetorical questions; they are choices our leaders are actively making.
Maybe you’re still going to ask ChatGPT to write your discussion post tonight. Fair enough, I can’t stop you. But next time you interact with AI, whether in a car that drives itself or a chatbot, ask yourself: is this really innovation, or a problem wrapped in shiny packaging? We can’t let convenience blind us to the consequences of unchecked technology.