Tuesday, October 6, 2020

Where Did Personal Computing Go Wrong? The Abandonment of User Empowerment, The Rise of "Big Platforms," and Solutions

Earlier today I found out about a documentary being created by Daniel Krasner, Matt Rocker, and Eric Gade called Message Not Understood, which plans to chronicle the history of personal computing and how it deviated from the values espoused by pioneers such as Alan Kay, a former Xerox PARC researcher who worked on the Smalltalk programming language and environment and who contributed greatly to the development of the graphical user interface.

I believe that one value originally stressed by pioneers such as Alan Kay and Bill Atkinson (an early Apple employee who created HyperCard, a programming environment that was quite popular with non-programmers before the Web became widespread) that has since fallen by the wayside is the notion of empowering the user through technology.  Personal computers were supposed to make people more productive by giving them access to computational power at home.  That power was no longer controlled by gatekeepers; it could be harnessed by anyone with a few thousand dollars and the willingness to learn.  The transition from command-line interfaces to GUIs such as the classic Mac OS and Windows helped democratize computing by making computers accessible to people who found the command line difficult to use.  When the World Wide Web started reaching people's homes in the mid-to-late 1990s, it added an entirely new dimension of power: access to vast amounts of information and the ability to communicate with people throughout the world.

What I believe fundamentally shifted personal computing (and, by extension, the Web) away from empowerment was the realization by industry giants that the platforms they built, whether operating systems, web browsers, or web services, were lucrative assets they could exploit.  Making the user dependent on a platform generates additional revenue.  The platform has to be good enough for users to perform their desired tasks, but it must not be so empowering that users can switch away without incurring serious costs, in inconvenience and sometimes in money (for example, it's harder to switch from iOS to Android when one has invested hundreds of dollars in iOS apps that must be repurchased on Android).  Microsoft realized how lucrative platforms are when it made a fortune selling MS-DOS licenses in the 1980s, and we are all familiar with the history of Windows and Internet Explorer.  I am of the opinion that Apple favors its walled-garden mobile platforms over the relatively open macOS, and that macOS grows more closed with each release (consider, for example, the notarization requirement in recent versions).  In some ways Google Chrome is the new Internet Explorer, and Facebook is the dominant social networking platform.

I argue that innovation in personal computing has stagnated since the advent of smartphones in the late 2000s.  Windows 7 and Mac OS X Snow Leopard, both released in 2009, are still widely praised.  Even on the Linux desktop, which has been less affected by commercialism (though affected in its own ways), GNOME 2 received more praise than the controversial GNOME 3, whose release led to forks such as MATE and Cinnamon.  In my opinion, Windows 10 (aside from the Windows Subsystem for Linux) and macOS Catalina offer no compelling advantages over Windows 7 and Snow Leopard.  But why would the world's biggest platform maintainers spend money innovating on these platforms when the platforms are already so lucrative?  Tellingly, anyone with a compiler could write a Windows app or (in pre-notarization days) a macOS app, but distributing an Android or iPhone app requires paying for access to an app store.

In many ways computing has become just like the rest of the media industry; computing is a medium, after all.  Plenty of technologists hone their craft, write excellent software, and help push computing forward, but it's hard to compete against platforms with ten-figure (or larger) valuations.  It's the same in music, literature, and film: there is plenty of good music being created by passionate people, but unless they are backed by huge budgets, they'll never reach the Top 40, even though the Top 40 is often lowest-common-denominator material.

Could industrial research turn things around and make personal computing more innovative?  Unfortunately I am pessimistic, and this has to do with the realities of research in a global economy that emphasizes quick results.  (I personally feel this is fueled by our economy's dependence on very low interest rates, but that's another rant for another day.)  The days of Xerox PARC and Bell Labs, when companies invested in research for the pure advancement of science, are over; it's all about short-term, applied work that promises immediate ROI.  Moreover, why would Google, Apple, or Microsoft fund research on making personal computing more empowering when such empowerment would threaten their bottom lines?

What's the solution?  I believe future innovation in personal computing that encourages user empowerment will have to come from academia, government, non-profits, non-VC-funded companies, or hobbyists; the major platform companies have no incentive to change course.  One idea I'm thinking about is some type of "PBS for personal computing" as an alternative to the big platforms.  I am fond of the Internet Archive, an example of a non-profit that is vital to the Web, and I hope the situation at Mozilla improves.  Another important step toward user empowerment is making programming easier for casual users, and making it easier for them to contribute to open-source software.  Back when Alan Kay ran the Viewpoints Research Institute, researchers there worked on a project called STEPS, which attempted to build a fully working desktop environment in a minimal amount of code by designing a domain-specific language for each part of the system.  If I remember correctly, they were able to implement a complete desktop environment in roughly 20,000 lines of code, most of it written in those domain-specific languages.  The goal of STEPS was a system as understandable as the Apple II, Commodore 64, and MS-DOS environments of the past, yet as feature-complete as modern desktop environments, which span hundreds of thousands or even millions of lines of code, far too much for any one person to understand.
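To give a feel for why domain-specific languages can shrink a system so dramatically, here is a toy sketch in Python.  It is entirely my own invention, not actual STEPS code (STEPS built its own purpose-built languages): a three-line layout description in a tiny made-up DSL expands into the widget structures that would otherwise take far more general-purpose code to construct by hand.

```python
# Toy illustration of the DSL idea behind STEPS (hypothetical example;
# the real project used its own purpose-built languages).  A few lines
# of declarative layout text stand in for much longer imperative code.

def parse_layout(dsl: str) -> list[dict]:
    """Turn lines like 'button Save 80x24' into widget descriptions."""
    widgets = []
    for line in dsl.strip().splitlines():
        kind, name, size = line.split()
        width, height = (int(n) for n in size.split("x"))
        widgets.append({"kind": kind, "name": name,
                        "width": width, "height": height})
    return widgets

layout = parse_layout("""
    button  Save    80x24
    button  Cancel  80x24
    textbox Name    200x24
""")
print(layout[0])  # {'kind': 'button', 'name': 'Save', 'width': 80, 'height': 24}
```

The point isn't the parser itself but the ratio: each part of the system gets a notation tailored to its domain, so the description stays small enough for one person to read in full.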
