Monday, October 26, 2020

Interesting Opinion Piece at The Chronicle

On my Facebook feed I saw an interesting opinion piece published in The Chronicle by André da Loba titled, "Is Deep Thinking Incompatible With an Academic Career?"  As a computer science researcher interested in long-term, speculative research projects, and as someone who, like the author, grew up in a low-income family and was considered a "gifted student," I found that this article resonated with me, and I recommend reading it.

Our economy rewards short-term gains over long-term initiatives; I blame over 30 years of artificially low interest rates for this.  This short-term thinking has affected not only industry but also academia.  The heyday of 1970s-era Bell Labs and Xerox PARC, with their focus on inventing the future through long-term, risky research, has long ended; now it's all about getting something shipped next quarter.  Academia is not much better with its grant cycles and its "publish or perish" demands.  I believe one of the biggest problems in modern American society is its structural disregard for the future.  Instead of saving and planning for the future, we collectively spend and live like there's no tomorrow.  But what happens when tomorrow comes?  From the standpoint of research, where will tomorrow's inventions come from if there's so much emphasis on next quarter's earnings or the next performance review cycle?

Alas, we need to pay our bills, and so we adapt and make do.  But I'm starting to think that there needs to be an "alt-economy" for researchers, scholars, and creators who want to build things and pursue scientific discovery without the pressures of modern industry and modern academia.  I'm always contemplating my career, and I'm considering pursuing this vision when the time comes for my next career move.

Saturday, October 17, 2020

Some Updates Regarding My Flexible, Composable, Libre Desktop Environment Project

Back in April I posted a proposal for a flexible, composable, libre desktop environment that will run on top of Linux and the various BSDs.  Since April I have been fleshing out some of the design decisions, though I haven't started writing code yet, partly because there are other design decisions I want to settle first, and partly because there are technologies I still need to learn.  I have experience building GUI applications (particularly using GTK and Glade), but I'm not as familiar with the lower levels of the graphics stack, so I need to gain more experience with 2D graphics programming before I can carry out this project.

Here are some key decisions I have made:
  • The desktop environment will be written in Common Lisp in order to take advantage of the Common Lisp Object System (CLOS), a dynamic object system that supports multiple dispatch.  I feel that a dynamic object system will make it easier for me to implement a component-based desktop environment (see the first sketch after this list).  I plan to write the demo programs in Common Lisp as well.
  • The desktop environment will target Wayland, which is expected to replace X for new GUI development.
  • This desktop will use its own BSD-licensed GUI toolkit, also written in Common Lisp and targeting Wayland (although support for non-Wayland backends may be possible).  There seem to be few options for BSD-licensed GUI toolkits; GTK+, Qt, and GNUstep are under the LGPL.  Having a BSD-licensed toolkit should help maximize its adoption.
  • This new GUI toolkit will be fully themable; programmers will describe windows using an S-expression syntax that separates a window's contents from its presentation.  For example, we could describe a window that contains the label "I'm a window!" and two buttons (one to change the color of the label and the other to close the window) using two sets of S-expressions: one describing the contents of the window, and one describing its layout (a note on parsing these files follows the list):
; Content definition file
(window main
  (label i-m-a-window)
  (button change-color)
  (button close))


; Layout definition file
(window (main)
  :size (300 400))


(label (i-m-a-window)
  :text "I'm a window"
  :font "Helvetica"
  :font-size 20
  :position (20 5)
  :align left)


(button (change-color)
  :text "Change Color"
  :size (50 150)
  :position (100 100))


(button (close)
  :text "Close"
  :size (50 150)
  :position (260 100))


  • Underneath the GUI toolkit will be a 2D graphics system that renders directly to a Wayland pixel buffer.  I am still exploring possible design options, but I've always been intrigued by the Display PostScript system used by NeXTSTEP (and by Sun's similar PostScript-based NeWS), and I personally love macOS's Quartz 2D graphics system, which uses the same graphics model as PDF.  I am leaning toward also using PDF's graphics model for this desktop environment (the last sketch below shows what such an API might look like).
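
To make the first bullet point more concrete, here is a minimal sketch of how CLOS's multiple dispatch could drive event handling in a component-based toolkit; every class and function name below is hypothetical and just for illustration:

; Multiple dispatch: HANDLE-EVENT specializes on the classes of
; *both* the widget and the event, something single-dispatch
; object systems cannot express directly.
(defclass widget () ())
(defclass button (widget) ())
(defclass text-field (widget) ())

(defclass input-event () ())
(defclass pointer-click (input-event) ())
(defclass key-press (input-event)
  ((key :initarg :key :reader event-key)))

(defgeneric handle-event (widget event))

(defmethod handle-event ((w button) (e pointer-click))
  (format t "Button clicked~%"))

(defmethod handle-event ((w text-field) (e key-press))
  (format t "Inserting ~A into text field~%" (event-key e)))

; A catch-all method for widget/event combinations that
; haven't been specialized yet.
(defmethod handle-event ((w widget) (e input-event))
  (format t "Ignoring ~A on ~A~%"
          (class-name (class-of e)) (class-name (class-of w))))

Evaluating (handle-event (make-instance 'button) (make-instance 'pointer-click)) prints "Button clicked", while unhandled combinations fall through to the catch-all method.  Adding a new widget or event type never requires editing existing classes, which is what makes this style attractive for a component-based design.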
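
One reason I like S-expressions for the definition files above is that Common Lisp can parse them with its built-in reader, so no custom parser is needed.  Here is a rough sketch, assuming a layout file like the one above (the function names here are hypothetical):

; Read every S-expression in a layout definition file.
; Binding *READ-EVAL* to NIL disables the #. reader macro so
; that definition files can't execute arbitrary code when read.
(defun read-layout-forms (pathname)
  (with-open-file (in pathname)
    (let ((*read-eval* nil))
      (loop for form = (read in nil :eof)
            until (eq form :eof)
            collect form))))

; Each form looks like (BUTTON (CLOSE) :TEXT "Close" :SIZE (50 150) ...),
; so the widget type, name, and properties destructure directly.
(defun describe-layout-form (form)
  (destructuring-bind (type (name) &rest properties) form
    (format t "~A named ~A: ~S~%" type name properties)))

For instance, (mapcar #'describe-layout-form (read-layout-forms #p"layout.lisp")) would print one line per widget in the hypothetical layout file.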
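
Finally, to sketch what a PDF-style imaging model might look like above the pixel buffer, here is a hedged example of a path-construction API; every name is hypothetical, and a real implementation would rasterize into the Wayland buffer instead of just recording segments:

; In the PDF/PostScript imaging model, drawing means building a
; path and then painting it with a stroke or fill operation.
(defclass graphics-context ()
  ((width  :initarg :width)
   (height :initarg :height)
   ; In a real system this slot would wrap a Wayland
   ; shared-memory pixel buffer.
   (pixels :initarg :pixels :initform nil)
   (path   :initform '() :accessor context-path)))

(defun move-to (ctx x y)
  (push (list :move x y) (context-path ctx)))

(defun line-to (ctx x y)
  (push (list :line x y) (context-path ctx)))

; STROKE would rasterize the accumulated path into PIXELS;
; here it just prints the segments and clears the path.
(defun stroke (ctx)
  (dolist (segment (reverse (context-path ctx)))
    (format t "~S~%" segment))
  (setf (context-path ctx) '()))

The appeal of this model is that the same path description can be stroked, filled, or clipped against without changing the drawing code, which is part of why I find the PDF graphics model attractive.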
As I mentioned before, I haven't started coding yet.  Because there are still many technologies I need to learn, as well as other responsibilities I must attend to, I anticipate that development of this side project will be slow.  Nevertheless, I hope to have a working prototype of the desktop environment completed sometime in late 2021 or early 2022.

Tuesday, October 6, 2020

Where Did Personal Computing Go Wrong: The Abandonment of User Empowerment, The Rise of "Big Platforms," and Solutions

Earlier today I found out about a documentary being created by Daniel Krasner, Matt Rocker, and Eric Gade called Message Not Understood, which plans to trace the history of personal computing and show how it deviated from the values imparted by pioneers such as Alan Kay (a former Xerox PARC researcher who worked on the Smalltalk programming language/environment and who contributed greatly to the development of the graphical user interface).

I believe that one value originally stressed by pioneers such as Alan Kay and Bill Atkinson (an early Apple employee who created HyperCard, a programming environment that was quite popular with non-programmers before the Web became widespread) has fallen by the wayside: the notion of empowering the user through technology.  Personal computers were supposed to make people more productive by giving them access to computational power at home.  Computational power was no longer controlled by gatekeepers; it could be harnessed by anyone with a few thousand dollars and the willingness to learn.  The transition from command-line interfaces to GUIs such as the classic Mac OS and Windows helped democratize computing by making computers more accessible to people who found command lines difficult to use.  When the World Wide Web started reaching people's homes in the mid- and late 1990s, it added an entirely new dimension of power: access to vast amounts of information and the ability to communicate with people throughout the world.

I believe the fundamental shift that moved personal computing (and, by extension, the Web) away from empowerment was the realization by industry giants that the platforms they built, whether operating systems, web browsers, or web services, were lucrative assets they could exploit.  Making the user dependent on a platform generates additional revenue.  The platform has to be good enough for users to perform their desired tasks, but it can't be so empowering that users can operate in a platform-agnostic way without incurring serious costs, both in inconvenience and sometimes in money (for example, it's harder to switch from iOS to Android when one has invested hundreds of dollars in iOS apps that must be repurchased for Android).  Microsoft realized how lucrative platforms are when it made a fortune selling MS-DOS licenses in the 1980s, and we are all familiar with the history of Windows and Internet Explorer.  I am of the opinion that Apple favors its walled-garden mobile platforms over the relatively open macOS, and that macOS is becoming more closed with each release (for example, the notarization requirement in recent versions).  In some ways Google Chrome is the new Internet Explorer, and Facebook is the dominant social networking platform.

I argue that innovation in personal computing has stagnated since the advent of smartphones in the late 2000s.  Windows 7 and Mac OS X Snow Leopard, both released in 2009, are still widely praised.  Even in the world of the Linux desktop, which has been less affected by commercialism (although still affected in its own ways), GNOME 2 received more praise than the controversial GNOME 3, whose release led to forks such as MATE and Cinnamon.  In my opinion, Windows 10 (aside from the Windows Subsystem for Linux) and macOS Catalina offer no compelling advantages over Windows 7 and Mac OS X Snow Leopard.  But why would the world's biggest platform maintainers spend money on innovation when these platforms are already so lucrative?  Tellingly, anyone with a compiler could write and distribute a Windows app or (in pre-notarization days) a macOS app, but distributing an iPhone app requires paying Apple's annual developer fee, and even Google Play charges a registration fee.

In many ways computing has become just like the rest of the media industry; computing is a medium, after all.  There are plenty of technologists who care about their craft, write excellent software, and help push computing forward, but it's hard to compete against platforms with ten-figure (or larger) valuations.  It's the same with music, literature, and film; for example, there is plenty of good music being created by passionate people, but unless they are backed by huge budgets, they'll never reach the Top 40, which is often lowest-common-denominator fare.

Could industrial research turn things around and make personal computing more innovative?  Unfortunately I am pessimistic, and this has to do with the realities of research in a global economy that demands quick results.  (I personally feel this is fueled by our economy's dependence on very low interest rates, but that's another rant for another day.)  The days when companies like Xerox and AT&T funded research at PARC and Bell Labs for the pure advancement of science are over; it's all about short-term, applied work that promises an immediate ROI.  Moreover, why would Google, Apple, or Microsoft fund research on making personal computing more empowering when such empowerment would threaten their bottom lines?

What's the solution?  I believe future innovation in personal computing that encourages user empowerment will have to come from academia, government, non-profits, non-VC-funded companies, or hobbyists; the major platform companies have no incentive to change course.  One idea I'm thinking about is the development of some type of "PBS for personal computing" as an alternative to the big platforms.  I am fond of the Internet Archive, an example of a non-profit that has become essential to the Web, and I hope the situation at Mozilla improves.

I also believe another important step toward user empowerment is making programming easier for casual users, and making it easier for them to contribute to open-source software.  Back when Alan Kay was at his Viewpoints Research Institute, researchers there worked on a project called STEPS, which attempted to build a fully working desktop environment in a minimal amount of code by developing domain-specific languages for implementing each part of the system.  If I remember correctly, they were able to implement a complete desktop environment in roughly 20,000 lines of code, most of it written in those domain-specific languages.  The purpose of STEPS was to develop a system that was as understandable as the Apple II, Commodore 64, and MS-DOS environments of the past, yet as feature-complete as modern desktop environments, which comprise hundreds of thousands or even millions of lines of code, far too much for one person to understand.