Customizable Software
TLDR: Lack of options ultimately stems from vertical integration locking in bundles of software. Utopia solves the software monopoly coordination problem by requiring conformity to standard interfaces.
Prerequisites: None
Imagine if people were only allowed to buy (unisex) size 9.5 shoes. Combining men and women, size 9.5 is (approximately) the average, so it should serve many people, right? Well…
Eyeballing the chart above, it might even be better to only make size 11.5. Even if 9.5 is the average, it’s a bad compromise that satisfies few people. But also… it’s pretty crazy to make only one size of shoe! Regardless of what size it is, far fewer than 50% of people would be satisfied. Better to make a range of shoe sizes, and let people select what works for them.
The same principle applies to just about everything in life. There are no people who are average in all ways. Being able to eat the food that we like, read the books that make us happy, and choose where to live allows us to live much happier, more satisfied lives. When we get into a new car to drive, we first adjust the seat and mirrors, and so it should be for everything else, when possible.
Know where customizable settings are really possible? Software. Compared to objects in “the real world,” things that are rendered on a screen have the potential to be changed and tweaked to a nearly infinite degree.
And yet, having the ability to deeply customize how your computer renders a document is very rare!
Right now you’re probably either reading this post on a website or in a mobile app. Does the service you’re using let you switch between light/dark mode? What about changing the colors more arbitrarily? Can you change the fonts or the font-size? Can you re-justify the text?
Why not? What stops you from being able to read in the way that makes you happiest? I can tell you from experience as a software engineer that it’s not a limitation on the computer. Many programs (e.g. Kindle) let you change these things. No, it’s a problem that ultimately stems from one fact: vertical integration is the default for new fields.
Designer Paternalism
“If I had asked my customers what they wanted they would have said ‘I wish for unlimited wishes.’”
It is an oft-spoken adage in design circles that customers don’t know what they want. One of the most famous design companies, Apple, has consistently looked better than its competitors in large part by enforcing that software on Apple computers have a specific look and feel that matches their standard. Perhaps customization isn’t so great, and is merely a crutch that inferior designers use to shift blame for bad design onto users.
While it’s certainly true that good design is important, consider that part of Apple’s appeal is in consistency. When using a new application, we want things to be where we expect; we want it to be like our favorite existing applications. One path towards that ideal is to enforce design standards on developers (like Apple does), but this path leads to a design monoculture where all users are forced to use the same kind of interface and all designers are basically prohibited from trying wild new things.
Customizability is also expensive. This is the reason that Henry Ford famously said “Any customer can have a car painted any color that he wants, so long as it is black.” Having only one color of car made assembly lines significantly cheaper to run. It wasn’t that Ford didn’t think customers knew what color they wanted, but rather he was betting that they would prefer cheaper cars to ones with a custom color.
But there is another path towards inexpensive-to-build interfaces that provide a consistent experience to users: letting people pick their favorite interface from a market of competitors. Consider text-editing, RSS, and email. There are a host of different programs for these domains, each with a different interface, some of which are very customizable. The thing that makes these programs interchangeable is that there is a standard format for text files, RSS feeds, and emails. I (usually) don’t have to switch to a different text-editor if I want to open different documents on my computer, and as a result almost all the documents I edit have a familiar interface.
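To make the point concrete, here is a minimal sketch of why a standard format makes readers interchangeable. The feed below is a toy RSS 2.0 document (the URLs and titles are invented for illustration); any conforming reader can extract the same items from it, which is exactly what lets users pick whichever feed reader they like.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed. Because the format is standardized, any
# conforming reader can display it with its own interface.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>https://example.com/1</link></item>
    <item><title>Second post</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def item_titles(feed_xml: str) -> list[str]:
    """Extract item titles from an RSS 2.0 document."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(FEED))  # ['First post', 'Second post']
```

The reader code knows nothing about the blog, and the blog knows nothing about the reader; the standard is the only point of contact.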
Or at least, I have a consistent offline text-editor.
What’s With the Web
In my lifetime I have seen the web explode in both popularity and functionality. I used to edit all my files offline, but now I often use Google Docs, Roam, Substack, LessWrong, or Gmail to write longform, and that’s ignoring things like social media. Each webpage has, more or less, its own interface. That would be fine if these were web-based alternatives to offline text-editors, but they’re usually not. Each website is a little walled-garden that is fancy if it allows import/export at all (instead of merely copy/paste).
This curse of non-interoperability is not unique to text-editing, either. Have you ever tried to access Facebook and Twitter direct-messages in the same place (perhaps with SMS thrown in)? Do you know of a way to view YouTube shorts on TikTok? Heck, the only sensible way to ensure you can read two different people’s writing without switching interfaces is if both people happen to be on the same platform or if you subscribe to their RSS feeds.
Where did this curse come from? In the beginning the web was supposed to be full of simple documents that would be lovingly rendered by your browser to be beautiful and perfect. Webpage creators could (in theory) focus on making content, and the web browsers could focus on displaying sites in a way that met users’ needs.
But the initial standards (hah! “standards”!) that the web used were garbage. Importantly, they didn’t provide enough flexibility for authors to create the kinds of pages that they wanted. It took 3 years to add a standard notation for tables(!), for instance. To serve the interests of both readers and writers, browsers like Netscape Navigator stepped in and offered new features such as pop-up windows, tracking cookies, and general purpose scripting in the form of things like Flash. These features (along with the more visible blinking/scrolling text and animated gifs) let designers produce masterpieces of the 1990s, but also introduced pervasive security flaws that are exploitable to this day.
The mad dash to slap features onto websites, even at the cost of making them inaccessible, insecure, and sometimes downright ugly was both a huge failure to plan for the future and also a big part of the web’s success. By organically meeting demands as they emerged, the web rocketed past its initial designs as a document-sharing service, and became the high-power application platform that it is today.
Apps are Natural Monoliths
The web isn’t the real problem. After all, mobile and desktop apps almost never directly work with each other either. The web is just another platform for programmers to create cool products. The real issue is that making things that work by themselves is (relatively) straightforward and can be quite profitable, while making things with shared interfaces requires solving hard coordination problems and can often undercut profits.
Things like email, RSS, text files, and even the web itself are exceptions to the rule. Interacting software written by two different groups basically only ever emerges because an early standard is put down by some high-minded academic, or because a company invests heavily in becoming a platform that can be used by future developers. Trying to become a platform is a risky move — if not enough developers get on board the expense might be wasted (and the company embarrassed). It’s risky for developers, too — if a platform disappears down the line, or is simply buggy, the product could be ruined.
In Zero To One, Peter Thiel writes that all successful innovators become, in a real sense, monopolists. The first group to invent something has the privilege of selling it without competition (aside from the natural “competition” of not-using-the-invention, of course). This privilege is the incentive for research; without it (most) new things wouldn’t get built. To maximize the time between breakthrough and facing strong competition, innovators want to ensure that developing another version of their product involves risks and high up-front costs, not just in terms of money but in terms of time and talent.
(Aside: This is also true about monopolists in established fields. Regulations on a field can serve the interests of corporations already in it by imposing costs on would-be competitors who want to enter the field. A power company, for instance, can benefit from regulations on power companies thanks to new companies being required to spend hundreds of thousands of dollars on lawyers.)
For tech products, the larger the product, the harder it will be to reinvent. More features are better, from the monopolist’s point of view. If you build a social-media platform that’s highly extensible and can be viewed by third-parties, you may find yourself struggling to display as many advertisements as you would if you’d forced users to use your apps. And the more a product depends on existing platforms, the more leverage those platforms have to acquire or replace you. All of these pressures lead towards a world of monoliths concentrated in the hands of a few key companies that have the power to not only extract high rents from their consumers, but also force specific one-size-fits-all designs upon them.
Utopian Software
In Utopia, software is extremely modular. Activities that we think of as only involving two or three “pieces of software” very clearly use dozens of “pieces” in Utopia. This increase is not due to increased complexity when doing the same task, but rather the simple result of breaking down applications into clearly defined, interoperable components.
For example, writing an essay on a Utopian computer involves:
Code that reads a document from a long-term memory location (perhaps on a remote server) and imports it to local memory for editing.
Code that regularly updates long-term memory with document changes, maintaining a version history.
Code that renders a document along with editing-specific metadata such as cursor position and time-since-last-save.
Code that gets user input from a keyboard and translates it into general commands.
Code that takes general commands and directs them to various other programs such as the text-editor, clipboard, settings manager, or application manager.
Code that takes commands and a document and produces an updated document or an error. (This is “the text-editor”.)
Code that handles command errors by calling out to other bits of code, like sound-playing code, alert-rendering code, and auto-suggest code.
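The decomposition above can be sketched as a handful of standard interfaces that independent modules implement. Everything here is hypothetical illustration (the interface names, the toy “append” command language, and both implementations are invented), but it shows the key property: any conforming storage module can be combined with any conforming editor module.

```python
from typing import Protocol

class Storage(Protocol):
    """Standard interface: long-term document storage."""
    def load(self, doc_id: str) -> str: ...
    def save(self, doc_id: str, text: str) -> None: ...

class Editor(Protocol):
    """Standard interface: a command plus a document yields an updated document."""
    def apply(self, command: str, text: str) -> str: ...

# One of many possible conforming storage modules.
class InMemoryStorage:
    def __init__(self) -> None:
        self._docs: dict[str, str] = {}
    def load(self, doc_id: str) -> str:
        return self._docs.get(doc_id, "")
    def save(self, doc_id: str, text: str) -> None:
        self._docs[doc_id] = text

# One of many possible conforming editor modules, with a toy command language.
class PlainEditor:
    def apply(self, command: str, text: str) -> str:
        kind, _, arg = command.partition(":")
        return text + arg if kind == "append" else text

# The user's chosen modules are wired together through the interfaces alone.
storage: Storage = InMemoryStorage()
editor: Editor = PlainEditor()
doc = editor.apply("append:Hello", storage.load("essay"))
storage.save("essay", doc)
print(storage.load("essay"))  # Hello
```

Neither module knows the other exists; swapping in a rival storage or editor module changes nothing about the wiring.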
Each of these modules can, and will, interact with other modules in the course of doing their job. The code that renders the document, for example, might call out to another module that does styled text rendering, but doesn’t know about being an editor.
Modules are re-used all the time. For instance, code that renders text might be used when reading a book, using a spreadsheet, or for a menu in a game. Indeed, not only are modules interoperable between different applications, but users are free to swap out modules for competitors at will. Modules of a given type do the same job, but are not indistinguishable. If I replace the module I use to render sans-serif fonts, for example, the result might be quite striking.
When new software is installed, it searches around on the computer for the modules it needs to function, getting the user’s preferred modules when applicable and asking to download more software when needed. As a result, many pieces of software are smaller, and involve less reinventing the wheel. When users install a new module to replace an existing one they by default update all applications that use that module, maintaining a consistent look-and-feel across their entire computer experience.
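A minimal sketch of that lookup, assuming a registry that maps each standard interface to the modules installed on the machine (all names here are invented):

```python
# Hypothetical registry: standard interface name -> installed implementations,
# plus the user's stated preferences.
INSTALLED = {
    "text-render": ["serif-classic", "sans-modern"],
    "spellcheck": ["en-basic"],
}
PREFERRED = {"text-render": "sans-modern"}

def resolve(interface: str) -> str:
    """Pick the user's preferred module for an interface, falling back to
    any installed one; signal that a download is needed otherwise."""
    choices = INSTALLED.get(interface, [])
    if not choices:
        raise LookupError(f"no module for {interface!r}: download required")
    return PREFERRED.get(interface, choices[0])

print(resolve("text-render"))  # sans-modern
print(resolve("spellcheck"))   # en-basic
```

Replacing `sans-modern` in the preference table would instantly restyle every application that resolves `text-render` through this registry.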
Utopia manages to have software made from cleanly separable parts by the creation and enforcement of a massive number of standards.
Code standards describe the interface between programs: what is given, and what is expected in return. For instance, the sorting standard specifies that a finite-length list of any type of comparable object is given as input to the module’s sort function, which must return another such list, with a guarantee that the result is in order and is a permutation of the input. The standards define how data types are described and passed between programs, and attempt to be as rigorous as possible about what is expected of a module that fulfills its duties.
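The sorting standard’s two guarantees are simple enough to check mechanically. The sketch below (a hypothetical conformance checker, not any real agency’s tooling) tests a candidate sort function against them: the output must be in order, and must be a permutation of the input.

```python
from collections import Counter

def conforms_to_sort_standard(sort_fn, inputs) -> bool:
    """Check sort_fn against the sorting standard described above:
    for each input list, the output must be in order and must be a
    permutation of the input (same elements, same multiplicities)."""
    for xs in inputs:
        ys = sort_fn(list(xs))
        in_order = all(a <= b for a, b in zip(ys, ys[1:]))
        is_permutation = Counter(ys) == Counter(xs)
        if not (in_order and is_permutation):
            return False
    return True

print(conforms_to_sort_standard(sorted, [[3, 1, 2], [], [5, 5, 1]]))  # True
print(conforms_to_sort_standard(lambda xs: xs[:1], [[3, 1, 2]]))      # False
```

Note that the standard says nothing about *how* the module sorts — quicksort, mergesort, or something exotic all conform equally, which is what leaves room for competition between modules.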
There are various institutions where modules can be verified as bug-free and non-malicious. Before software is installed, it is checked against a whitelist of such institutions, provided by the user.
Standards are gradually adopted by the government through a slow, rolling process.
First, individuals and organizations submit standards to state-sponsored competitions. These competitions are centered around improving things in a certain area, or for a certain application, rather than assuming how a standard should cut things into pieces. Submissions are then publicly ranked and compared by experts, industry leaders, politicians, and the general public. In this ranking “no new standard” is an option that is weighed.
The software industry then has some amount of time to begin building interfaces and conforming to whichever standard(s) they predict will be adopted. A standard is then chosen by the relevant agency, based on both the contest rankings and what the industry has been working towards supporting. After a standard is chosen it comes into effect a while later, with non-compliant software being subject to gradually increasing fees as long as it’s on the market. Free software (without ads, branding, or other financial motive) is not subject to most standards, though non-compliant free software tends to be rare, as users expect interoperable programs.
Non-compliant software is checked by a bounty system. Coders submit test suites that check modules to ensure they’re compliant, which are then approved (or rejected) by the government. When a new piece of for-profit software is submitted for inspection, if it fails any tests and the government agrees it’s in violation, the submitter and the author of the relevant test-suites both get a small reward based on how many users are estimated to have purchased the software.
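The bounty mechanism might look something like the sketch below. All of it is illustrative: the test suites, the broken module, and the flat per-user reward rate are invented stand-ins for whatever the standards agency would actually use.

```python
# Hypothetical bounty check: run community-submitted test suites against a
# module; on failure, compute a reward proportional to estimated users.
def run_bounty_checks(module, test_suites, estimated_users, rate=0.001):
    failures = [name for name, test in test_suites if not test(module)]
    reward = round(estimated_users * rate, 2) if failures else 0.0
    return failures, reward

# A toy non-compliant "sort" module that silently drops an element.
bad_sort = lambda xs: sorted(xs)[:-1] if xs else xs

# Two submitted test suites, each a named predicate on the module.
suites = [
    ("keeps-length", lambda m: len(m([3, 1, 2])) == 3),
    ("handles-empty", lambda m: m([]) == []),
]

print(run_bounty_checks(bad_sort, suites, estimated_users=10_000))
# (['keeps-length'], 10.0)
print(run_bounty_checks(sorted, suites, estimated_users=10_000))
# ([], 0.0)
```

The incentive structure matters more than the arithmetic: the reward scales with how many users were exposed, so the most widely sold non-compliant software attracts the most scrutiny.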
Another branch of the standards agency is tasked with reducing the complexity of the body of regulations, including by repealing aging regulations or otherwise changing things for the better. Public prizes are regularly offered for anyone who can simplify the tech laws. This same part of the agency provides guides and manuals to help programmers orient to what’s expected, and is judged overall on how clear and simple things are.
Standards vary in size, and often overlap. For instance, if a particular feature is controversial, a standard may be rolled out that doesn’t weigh in on that feature, only for a later standard to come in and specify how it should be done. Many standards are aimed at cutting particularly large pieces of code into parts that can be re-used in other applications. Standards are generally aimed at applications that have existed for many years — not the cutting edge of software, which is mostly unregulated.