Wednesday, July 6, 2022

Some Thoughts about the Future of the Linux Desktop

Now it's time for a somewhat incoherent ramble about my thoughts on the Linux desktop.  Earlier today there was a post on Hacker News regarding an announcement from the GNOME development team that GTK 5 will be exclusively for Wayland, thus dropping support for the venerable X Window System.  I have many thoughts about the transition from X11 to Wayland, and I won't share all of them here.  While I believe that X11 is quite complex and a replacement will be welcome, I'm also concerned that many good aspects of the X11 ecosystem will be lost: the ability to run GUI applications remotely, the ability to choose from a wide array of window managers, and support for non-Linux operating systems such as the BSDs.  This is partly a consequence of how Wayland is designed and partly due to the effort required to convert X window managers into Wayland compositors.

I'm also concerned about the influence that big players have on the Linux desktop community, regardless of the feelings of the overall community.  We've seen this play out in the controversy over GNOME 3 (which splintered the GNOME 2 userbase into three competing desktops: GNOME 3, MATE, and Cinnamon), in the stewardship of the GTK toolkit (is it the GNOME toolkit, or should its maintainers keep it a general-purpose toolkit, recognizing that non-GNOME desktops and applications depend on it?), and in the adoption of systemd.  It seems that certain decisions have been foisted upon the Linux community.  Many of us are drawn to free, open source software because we don't like the decisions foisted upon us by Microsoft or Apple, but to our dismay, we can't escape this even in the FOSS ecosystem.  "If you don't like it, then make a fork and modify the code," some people retort.  But one has to be really dedicated to learn and modify software that contains hundreds of thousands of lines of code.  Thus, many of us have no effective choice but to live with the changes, which leads to resentment, which leads to flamewars on social media.

But does it have to be this way?  Is there a way for technically inclined users to get the desktops they want?  I believe there is a pathway to get there, and it runs through embracing simple, modular, composable software rather than building large edifices and platforms that seek to take on major players like Apple, Google, and Microsoft directly.  I believe we can learn from projects such as Smalltalk, Project Oberon, Plan 9, OpenDoc, and Microsoft's 1990s technologies (OLE, COM, ActiveX), and update their ideas for the 2020s to build component-based GUIs.  We can also take lessons from the STEPS project, an effort by Alan Kay's Viewpoints Research Institute to build an entire operating system with a GUI in just 20,000 lines of code.  Reducing the amount of code needed to build a complete system could revolutionize open source software: with a smaller, easier-to-understand code base, users would be better equipped to contribute.  There would be far less complaining about GNOME, systemd, and other software projects if it were easier for users to respond to the common "if you don't like it, then fork it" retort by doing exactly that.

Tuesday, April 19, 2022

Plan for Studying for Level N3 of the Japanese Language Proficiency Test

It's been a while since I last made a blog post.  A lot has been going on, both in the world and in my personal life, but I thought I'd share my plans for studying for Level N3 of the Japanese Language Proficiency Test (JLPT), which will be held on Sunday, December 4, 2022, provided the pandemic doesn't worsen by then.

I've been studying Japanese on and off for over 22 years, ever since I was a fifth grader.  In high school I attended a Saturday Japanese language school called Sakura Gakuen in Sacramento, and after graduating from Cal Poly with my bachelor's degree, I moved to Japan for eight months to intern at Fujitsu Laboratories Ltd. in Kawasaki.  Living in Japan was one of the greatest experiences of my life, and my Japanese skills improved dramatically while I was there.  Unfortunately, my Japanese has lain stagnant since returning to America due to the demands of graduate school and other things happening in my personal life.  While my Japanese didn't worsen, thanks to subsequent vacations to Japan as well as watching Japanese dramas and listening to Japanese music, it didn't dramatically improve either.  I attempted Level N4 of the JLPT back in December 2012.  I did very well on the vocabulary section (I credit that to spending over a year studying the Core 6000 Japanese vocabulary deck via Anki), and I passed each individual section, but I did not meet the overall passing bar for the exam; the listening section was the most difficult for me and where I scored the lowest.

Recently I've been getting serious about studying Japanese again for personal and career reasons.  I've dreamed of becoming fluent in Japanese for over 20 years, and I want to put in the hard work to fulfill this dream.  Back in October I started taking a free Japanese course hosted by the Santa Clara Valley Japanese Christian Church.  The church offers Japanese classes at various levels, taught by skilled Japanese teachers who are also native speakers.  I am currently taking the highest level offered, which uses Genki II as the textbook and meets every other week (due to the COVID-19 pandemic, our classes are held online via Zoom).  The classes proceed at a gentle pace; there are no homework assignments or exams.  This fit well with my lifestyle, since at the time I started the class I was teaching a course on programming languages at San Jose State University.  The teacher is very friendly, and I enjoy interacting with her and the other students; since there are fewer than ten of us, we're able to get individual attention during each biweekly lesson.

Since the school year at Santa Clara Valley Japanese Christian Church is ending and there's no summer instruction, I've thought about my next steps.  I want to take Level N3 of the JLPT.  My goal is to become fluent in Japanese, and part of this goal includes taking Level N1, the most advanced level, of the JLPT, which I want to take in either 2024 or 2025.  Since I have until December 4 to prepare for the N3 exam, and since my current textbook (Genki II) only covers JLPT N4 material, I'm going to need to increase the intensity of my studies.  However, I have more free time this year; I don't plan to teach during the rest of 2022.  Therefore I will be using some of my free time to prepare for level N3 of the JLPT.

Below is my plan:

  1. Finish Genki II, which covers the material needed to pass JLPT N4.  I should be able to finish this textbook no later than mid-July.
  2. Complete the entirety of Tobira, an intermediate-level textbook that is said to cover JLPT N3 material according to various online forums.  I plan to finish this textbook no later than the end of October.
  3. Beginning in September, take practice JLPT N3 exams, making sure to brush up on weak points after each attempt.
  4. Throughout the next eight months I will be spending more time building my listening comprehension skills in Japanese.  I plan to do so by watching more Japanese dramas and movies, as well as using Japanese in conversation more.
  5. I am also studying the Mangajin series, which I recently saw recommended in a Hacker News post.

I'm looking forward to making more progress with my Japanese studies and climbing each JLPT step until I make it to Level N1.  I'm glad I finally have the time again to take my Japanese studies seriously.


Monday, February 15, 2021

Note to Self When Setting up 2GB NVIDIA Jetson Nano: Ditch the Network Connection During Setup

During this Presidents Day weekend, I received a 2GB NVIDIA Jetson Nano that I purchased online from Micro Center about a week ago.  At the time of purchase it was on sale for $49, but as of this writing it has reverted to its list price of $59.  I needed a CUDA-capable GPU for a side project I'm working on, but given the COVID-19-driven inflation in GPU prices, I figured that purchasing a Jetson Nano would be my best option for now.  Besides, if I need more horsepower than the Jetson Nano provides, I could always rent a GPU instance from Amazon.

NVIDIA Jetson Nano Equipment

I followed the installation instructions from NVIDIA's official guide.  However, after installation, I found that I couldn't boot my Jetson Nano after shutting it down.  I ended up having to re-image my SD card.  The next time around, however, I decided to leave the included WiFi adapter disconnected from the Jetson Nano instead of having it connected during my first installation.  By doing this, I was able to restart from a cold shutdown without any problems.  I later connected my WiFi adapter, and my Jetson Nano works properly, whether the WiFi adapter is connected at power-up or after power-up.

I believe the problem had to do with a system update that occurred the first time I set up my Jetson Nano.  By not having a network connection the second time I set up my Jetson Nano, it did not update the operating system; thus I did not run into the same startup problem as before.

So, the summary is this: when you initially set up your 2GB NVIDIA Jetson Nano, do not connect it to the network until after it is set up.

Wednesday, January 13, 2021

Nice Interview of Professor Stephen Freund On Teaching Programming Languages at a Liberal Arts College

This is a nice interview of Professor Stephen Freund, who works at Williams College and specializes in programming languages.  I haven't had the experience studying at a liberal arts college, but I am familiar with undergraduate-focused universities; I earned my bachelor's degree at Cal Poly San Luis Obispo, and I recently taught a course on programming language paradigms and principles at San José State University.  It is always nice to read about others' experiences and to obtain advice.

Monday, October 26, 2020

Interesting Opinion Piece at The Chronicle

On my Facebook feed I saw an interesting opinion piece published on The Chronicle by André da Loba titled, "Is Deep Thinking Incompatible With an Academic Career?"  As a computer science researcher who is interested in long-term, speculative research projects, and also as a person who, like the author, grew up in a low-income family and was considered a "gifted student," this article resonates with me, and I recommend reading it.

Our economy promotes short-term gains and not long-term initiatives; I blame over 30 years of artificially-low interest rates for this. This short-term thinking has affected not only industry, but also academia.  The heyday of 1970s-era Bell Labs and Xerox PARC with their focus on inventing the future, which requires long-term, risky research, has long ended; it's all about getting something shipped next quarter. Academia is not much better with its grant cycle and its "publish or perish" demands.  I believe one of the biggest problems in modern American society is its structural disregard for the future. Instead of saving and planning for the future, we collectively spend and live like there's no tomorrow. But what happens when tomorrow comes? From the standpoint of research, where will tomorrow's inventions come from if there's so much emphasis on next quarter's earnings or the next performance review cycle?

Alas, we need to pay our bills, and so we adapt and make do. But I'm starting to think that there needs to be an "alt-economy" for researchers, scholars, and creators who want to create, build, and pursue scientific discovery without the pressures of modern industry and modern academia.  I'm always contemplating my career, and I'm considering pursuing this vision when it is time for me to make my next career move.

Saturday, October 17, 2020

Some Updates Regarding My Flexible, Composable, Libre Desktop Environment Project

Back in April I posted a proposal for a flexible, composable, libre desktop environment that will run on top of Linux and various BSDs.  Since April I have been fleshing out some of the design decisions, though I haven't started writing code yet.  This is partly because there are other design decisions I want to make before coding, and partly because there are some technologies I still need to learn: I have experience building GUI applications (particularly using GTK and Glade), but I'm not as familiar with the lower levels of the graphics stack, so I need to gain more familiarity with 2D graphics programming before I can carry out this project.

Here are some key decisions I have made:
  • The desktop environment will be written in Common Lisp in order to take advantage of the Common Lisp Object System, a dynamic object system that supports multiple dispatch.  I feel that using a dynamic object system will make it easier for me to implement a component-based desktop environment.  I plan to write demo programs that are also written in Common Lisp.
  • The desktop environment will be for Wayland, which is expected to replace X for new GUI development in the future.
  • This desktop will be written using its own BSD-licensed GUI toolkit, targeting Wayland (although support for non-Wayland backends may be possible) and also written in Common Lisp.  There seem to be few options for BSD-licensed GUI toolkits; GTK+, Qt, and GNUstep are under the LGPL.  Having a BSD-licensed toolkit will maximize its adoption.
  • This new GUI toolkit will be fully themeable; programmers will be able to describe windows using an S-expression syntax.  For example, we could describe a window that contains the label "I'm a window!" and two buttons (one to change the color of the label and the other to close the window) using two definition files: one describing the contents of the window, and one describing its layout:
; Content definition file
(window main
  (label i-m-a-window)
  (button change-color)
  (button close))

; Layout definition file
(window (main)
  :size (300 400))

(label (i-m-a-window)
  :text "I'm a window!"
  :font "Helvetica"
  :font-size 20
  :position (20 5)
  :align left)

(button (change-color)
  :text "Change Color"
  :size (50 150)
  :position (100 100))

(button (close)
  :text "Close"
  :size (50 150)
  :position (260 100))

  • Underneath the GUI toolkit will be a 2D graphics system that renders directly to a Wayland pixel buffer.  I am still exploring possible design options, but I've always been intrigued by NeXTSTEP's Display PostScript and Sun's PostScript-based NeWS, and I personally love macOS's Quartz 2D graphics system, which uses the same graphics model as PDF.  I am leaning toward also using PDF as the 2D graphics model of this desktop environment.
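To illustrate the first point above, here is a hypothetical sketch (not actual project code) of how CLOS multiple dispatch could route desktop events: a single generic function selects behavior based on the classes of both the widget and the event, which is a natural fit for a component-based environment.  All class and function names below are illustrative.

;; Hypothetical event routing via CLOS multiple dispatch
(defclass widget () ())
(defclass button (widget) ())
(defclass text-field (widget) ())

(defclass event () ())
(defclass click-event (event) ())
(defclass key-event (event) ())

;; HANDLE dispatches on the classes of BOTH arguments.
(defgeneric handle (widget event))

(defmethod handle ((w button) (e click-event))
  (format t "button clicked~%"))

(defmethod handle ((w text-field) (e key-event))
  (format t "text field received a keystroke~%"))

;; Fallback for widget/event combinations with no specific method.
(defmethod handle ((w widget) (e event))
  (format t "event ignored~%"))

With single-dispatch object systems, this routing usually requires a visitor pattern or manual type checks; with CLOS, adding a new widget/event pairing is just one more defmethod.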

As I mentioned before, I haven't started coding yet.  Because there are still many technologies I need to learn, as well as other responsibilities, I anticipate that development of this side project will be slow.  Nevertheless, I hope to have a working prototype of the desktop environment completed sometime in late 2021 or early 2022.

Tuesday, October 6, 2020

Where Did Personal Computing Go Wrong: The Abandonment of User Empowerment, The Rise of "Big Platforms," and Solutions

Earlier today I found out about a documentary being created by Daniel Krasner, Matt Rocker, and Eric Gade called Message Not Understood, which plans to show the history of personal computing and how personal computing deviated from the values that pioneers such as Alan Kay (a former Xerox PARC researcher who worked on the Smalltalk programming language/environment and who greatly contributed to the development of the graphical user interface) imparted.

I believe that one value originally stressed by pioneers such as Alan Kay and Bill Atkinson (an early Apple employee who created HyperCard, a programming environment that was quite popular with non-software engineers before the Web became widespread) that has since fallen by the wayside is the notion of empowering the user through technology.  Personal computers were supposed to be about making people more productive by giving them access to computational power at home.  Computational power was no longer controlled by gatekeepers; it could be harnessed by anyone with a few thousand dollars and the willingness to learn.  The transition from command-line interfaces to GUIs such as the classic Mac OS and Windows helped democratize computing by making computers more accessible to people who found command-line interfaces difficult to use.  When the World Wide Web started reaching people's homes in the mid- and late 1990s, it added an entirely new dimension of power: access to large amounts of information and the ability to communicate with people throughout the world.

What I believe was the fundamental shift that moved personal computing (and, by extension, the Web) away from empowerment is the realization by industry giants that the platforms they built, whether operating systems, web browsers, or web services, were lucrative assets they could exploit.  Making the user dependent on a platform results in additional revenue.  The platform has to be good enough for users to perform their desired tasks, but it can't be so empowering that it allows the user to operate in a platform-agnostic way without imposing serious costs, both in terms of inconvenience and sometimes in terms of money (for example, it's harder to switch from iOS to Android when one has invested hundreds of dollars in iOS apps that must be repurchased for Android).  Microsoft realized how lucrative platforms are when it made a lot of money selling MS-DOS licenses in the 1980s, and we are all familiar with the history of Windows and Internet Explorer.  I am of the opinion that Apple favors its walled-garden mobile platforms over the relatively open macOS, and that macOS is becoming more closed with each release (for example, the notarization requirement in recent versions of macOS).  In some ways Google Chrome is the new Internet Explorer, and Facebook is the dominant social networking platform.

I argue that innovation in personal computing has stagnated since the advent of smartphones in the late 2000s.  Windows 7 and Mac OS X Snow Leopard, both released in 2009, are still widely praised.  Even in the world of the Linux desktop, which has been less affected by commercialism (although still affected in its own ways), GNOME 2 received more praise than the controversial GNOME 3, whose release led to the development of forks such as MATE and Cinnamon.  In my opinion, there are no compelling advantages of Windows 10 (besides the Windows Subsystem for Linux) or macOS Catalina over Windows 7 and Mac OS X Snow Leopard.  But why would the world's biggest platform maintainers spend money on innovating these platforms when they are already so lucrative today?  Tellingly, anyone with a compiler could write a Windows app or (in pre-notarization days) a macOS app, but distributing an Android or iPhone app requires paying for access to an app store.

In many ways computing has become just like the rest of the media industry; computing is a medium, after all.  There are plenty of technologists who work on their craft who write excellent software and who help push computing forward, but it's hard to compete against platforms with valuations worth ten or more figures.  But it's the same with media, literature, and film; for example, there is plenty of good music being created by passionate people, but unless they are backed by huge budgets, they'll never reach the Top 40, yet the Top 40 is often lowest-common-denominator stuff.

Could industrial research turn things around and make personal computing more innovative?  Unfortunately I am pessimistic, and this has to do with the realities of research in a global economy that emphasizes quick results.  (I personally feel this is fueled by our economy's dependence on very low interest rates, but that's another rant for another day.)  The days of Xerox PARC and Bell Labs where companies invested in research for the pure advancement of science are over; it's all about short-term, applied work that has promises of an immediate ROI.  Moreover, why would Google, Apple, or Microsoft fund research on making personal computing more empowering when such empowerment would threaten the bottom lines of these companies?

What's the solution?  I believe future innovation in personal computing that encourages user empowerment is going to have to come from academia, government, non-profits, non-VC-funded companies, or hobbyists; there is no incentive for the major platform companies to change course.  One idea I'm thinking about is developing some type of "PBS for personal computing" as an alternative to "big platforms."  I am fond of The Internet Archive, an example of a non-profit that is vital to the Web, and I hope the situation at Mozilla improves.  I also believe another important step toward user empowerment is making programming easier for casual users, and making it easier for casual users to contribute to open-source software.  Back when Alan Kay was at his Viewpoints Research Institute, researchers there worked on a project called STEPS, which attempted to build a fully working desktop environment with a minimal amount of code by developing domain-specific languages for implementing each part of the system.  If I remember correctly, they were able to implement a complete desktop environment in just 20,000 lines of code, most of it written in domain-specific languages.  The purpose of STEPS was to develop a system that was as understandable as the Apple II, Commodore 64, and MS-DOS environments of the past, yet as feature-complete as modern desktop environments, which are implemented in hundreds of thousands or even millions of lines of code, far too much for one person to understand.