Author: Thom Holwerda
All weekend, I’ve been mulling over a recent blog post by Fedora Project Leader Matthew Miller, which he wrote and published on behalf of the Fedora Council. Fedora (the KDE version) is my distribution of choice, I love using it, and I consider it the best distribution for desktop use, and not by a close margin either. As such, reading a blog post in which Fedora announces plans to make extensive use of “AI” was bound to make me feel a little uneasy.

Miller states – correctly – that the “AI” space as it stands right now is so dominated by hyperbole and over-the-top nonsense that it’s hard to judge the various technologies underpinning “AI” on merit alone. He continues that he believes that, stripped of all the hyperbole and techbro bullshit, there’s “something significant, powerful”, and he wants to make “Fedora Linux the best community platform for AI”. So, what exactly does that look like?

In addition to the big showy LLM-based tools for chat and code generation, these advances have brought big jumps for more tailored tasks: for translation, file search, home automation, and especially for accessibility (already a key part of our strategy). For example, open source speech synthesis has long lagged behind proprietary options. Now, what we have in Fedora is not even close to the realism, nuance, and flexibility of AI-generated speech.
↫ Matthew Miller

Some of these are things we can all agree are important and worthwhile, but lacking on the Linux desktop. If we can make use of technologies labelled as “AI” to improve, say, text-to-speech on Linux for those who require it for accessibility reasons, that’s universally a great thing. Translation, too, is, at its core, a form of accessibility, and if we can improve machine translations so that people who, for instance, don’t speak English gain more access to English content, or if we can make the vast libraries of knowledge locked into foreign languages accessible to more people, that’s all good news.

However, Fedora aims to take its use of “AI” even further, and wants to start using it in the process of developing, making, and distributing Fedora. This is where more and more red flags start to pop up for me, because I don’t feel like the processes and tasks they want to inject “AI” into are the kinds of processes and tasks where you want humans taken out of the equation.

We can use AI/ML as part of making the Fedora Linux OS. New tools could help with package automation and bug triage. They could note anomalies in test results and logs, maybe even help identify potential security issues. We can also create infrastructure-level features for our users. For example, package update descriptions aren’t usually very meaningful. We could automatically generate concise summaries of what’s new in each system update — not just for each package, but highlighting what’s important in the whole set, including upstream change information as well.
↫ Matthew Miller

Even the tools built atop billions and billions of euros of investment by Microsoft, Google, OpenAI, Facebook, and similar juggernauts are not exactly good at what they’re supposed to do, and fail at even the most basic task of providing answers to simple questions. They lie, they make stuff up, they bug out and produce nonsense, they’re racist, and so on. I don’t want any of that garbage near the process of making and updating the operating system I rely on every day.
Miller laments how “AI” is currently a closed-source, black-box affair, which obviously doesn’t align with Fedora’s values and goals. He doesn’t actually explain how Fedora’s use of “AI” is going to address this, though. They’re going to have to find ethical, open source models that are also of high quality, and that’s a lot easier said than done. Sourcing doesn’t even get a single mention in this blog post, even though I’m fairly sure it’s one of the two major issues many of us have with the current crop of “AI” tools.

The blog post also completely neglects to mention the environmental cost of training these “AI” tools. It takes an enormous amount of electricity to train them, and with climate change ever accelerating and the destruction of our environment visible all around us, not mentioning this problem when you’re leading a project like Fedora seems disingenuous at best, and malicious at worst.

While using “AI” to improve accessibility tools in Fedora and the wider Linux world is laudable, some of the other intended targets seem more worrisome, especially when you take into account that the blog post makes no mention of the two biggest problems with “AI”: sourcing, and environmental impact. If Fedora truly intends to fully embrace “AI”, it’s going to have to address these two problems first, because otherwise it’s just latching onto the hype without really understanding the cost. And that’s not something I want to hear from the leaders of my Linux distribution.