Tuesday, April 21, 2020

Nice Blog Post about Composable Software

I just found this 2013 blog post from Paul Chiusano advocating composable software, including for web applications.  A lot of its ideas overlap with those I discussed in my previous blog post, which proposed a new desktop environment for Linux/BSD that emphasizes composable software.

Sunday, April 19, 2020

A Proposal for a Flexible, Composable, Libre Desktop Environment

Note: I posted a better-formatted version of this document here as a PDF file.  Unfortunately there are some formatting glitches with Blogger's HTML editor.

Disclaimers:

  1. This is not an official project. This document describes my thoughts about a desktop environment intended for Unix-based operating systems that is libre (i.e., free software per the definition of the Free Software Foundation), is composable (where users can create command-line and GUI tools by connecting smaller tools together), and is flexible (where software tools do not impose a particular user interface, allowing the user to modify the UI of the tool to best suit the user's preferences). Whether or not I will work on it is something I still need to consider, but I'm sharing these ideas for feedback to see whether pursuing this as a side project is worthwhile.
  2. I will be expressing many opinions in this document. In the words of LeVar Burton, "You don't have to take my word for it."

Problems with Today's Desktop Environments and Applications

  1. Smartphone- and tablet-based UI/UX metaphors have been inappropriately applied to some desktop environments, resulting in a loss of usability compared to the desktop environments of the 2000s. This is especially apparent in Windows 8, GNOME 3, and (to a lesser extent) Windows 10. When Apple introduced the iPhone and iPad in 2007 and 2010, respectively, there was much talk in the personal computing world about mobile computing replacing desktop computing. The developers of Windows and GNOME were heavily influenced by this thinking, and they sought to develop versions of their desktops suitable for both desktop and mobile computing. Now, I must commend the developers of Windows and GNOME for taking risks; Windows 7 and GNOME 2 were well-received by many people, and changing these environments was a gamble. The results were Windows 8 and GNOME 3. While these environments were well-received by mobile users, some desktop users were disappointed, feeling that the user experience was a downgrade from Windows 7 and GNOME 2. For Windows this led to people refusing to upgrade from Windows 7, and for GNOME this led to the fracture of the GNOME-based desktop community into GNOME 3, MATE, and Cinnamon, with both MATE and Cinnamon aiming to serve those alienated by GNOME 3’s changes. I believe the lesson in this is that developers of desktop environments should respect the fact that desktop computing has fundamentally different use cases than mobile computing, and that trying to create a common interface for both ends in misapplied UI/UX metaphors.
  2. The UI/UX design fads of the 2010s, including “flat design” and the gratuitous use of screen space, are a usability regression from the desktops of the 1990s and the 2000s. Consider the Windows 95 and Mac OS 7.5 interfaces: it is largely clear which elements are clickable and which are not. This held true through the late 2000s with Windows 7 and Mac OS X 10.6 Snow Leopard. Contrast that with the flat interfaces of many software products today, where it’s much harder to visually determine which elements are clickable and which are not. It’s not just flat design that’s problematic; there are other design decisions I disagree with. In macOS, the once easily visible bright-blue scroll bars have been replaced with thin, gray scroll bars that are harder to use, on the assumption that we’ll be using our mouse’s scroll wheel or our laptop touchpad’s scroll gestures instead of the scroll bar itself. In Windows 10, the title bars look excessively large relative to the menu bar (the large bars exist so that a window can be moved on a touchscreen; this is a design decision that would be appropriate for mobile computing but is unnecessary in desktop computing), and windows in new-style programs often devote large amounts of space to whitespace. I would love to be able to switch to Classic mode (i.e., a Windows 2000-style interface) in Windows 10; on Windows I feel most productive in Classic mode. There’s just one problem….
  3. Modern desktop environments and applications are increasingly curtailing the ability for users to control the appearance of their desktop environment and their applications. For Mac users this is not a new development. Ever since the transition from Mac OS 9 to Mac OS X in 2001, Apple has not provided mechanisms for users to apply themes that are different from the Mac OS X Aqua interface. Windows, however, used to support many modifications to its default themes. This changed in Windows 10 when it became more difficult to theme the desktop environment. In 2019 some GNOME developers wrote an open letter urging Linux distributions to not apply custom themes to their applications. Here is a key excerpt from the letter:
    “On a platform level, we believe GTK should stop forcing a single stylesheet on all apps by default [emphasis original]. Instead of apps having to opt out of this by hardcoding a stylesheet, they should use the platform stylesheet unless they opt in to something else. We realize this is a complicated issue, but assuming every app works with every stylesheet is a bad default.”
    Although the signatories of the open letter have explicitly stated that they are not opposed to end-users “tinkering” with the style of their applications, I feel that their suggestion to require GTK applications to explicitly opt into theming will, if implemented, make it more difficult for users to apply themes to their desktop environments and applications.
  4. Many desktop environments and applications lack the ability for users to customize the UI based on their needs and preferences. UI/UX decisions are a major cause of complaints about software. In some situations users respond by rejecting that software, instead seeking out alternatives. In other situations, though, sometimes the user doesn’t have a choice, but instead must learn how to cope with the UI.
    But what if users had another choice? What if users were able to modify the UI of their software as they saw fit without resorting to modifying its source code? For users who are not comfortable with adjusting their UI settings, what if they could download UI configurations from a repository of user-submitted configurations, including from UI experts who ran formal usability tests? This would increase user satisfaction with software products, since users would no longer feel that they have to accept the UI decisions made by the product’s developers and designers. Microsoft took a step in the right direction by making the ribbon in Microsoft Office user-modifiable. When the ribbon was introduced in Office 2007, it had few configuration options, and it was controversial among long-time Office users, particularly since there was no way to switch back to the menu-and-toolbar-based interface of Microsoft Office 2003. However, while later versions of Microsoft Office still do not provide a means to return to menus and toolbars, the ribbon has been made much more customizable.

Monolithic Applications versus Composable Tools

Contemporary desktop environments promote the use of monolithic applications, where the application itself is expected to provide all the functionality that users need in order to perform a task. Often these applications are “silos” that tend not to interact well with each other unless they are part of a common suite of applications such as Microsoft Office or the Adobe Creative Suite. While some of these applications may provide internal scripting support (such as Microsoft Visual Basic for Applications), most applications don’t provide external scripting support (e.g., the ability of a bash script or a Python program to access Adobe Photoshop’s image cropping functionality in order to programmatically crop images).
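As a small illustration of the kind of external scriptability I have in mind, consider how Python’s Pillow library exposes image cropping as an ordinary function call (the file names below are just placeholders). This is the sort of programmatic access I wish every application’s core functionality offered:

    # Cropping an image programmatically with Pillow, no GUI required.
    from PIL import Image

    with Image.open("photo.png") as img:
        # The box is (left, upper, right, lower), in pixels.
        cropped = img.crop((10, 10, 650, 490))
        cropped.save("photo-cropped.png")
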
I contrast this with the traditional Unix approach of combining small tools to perform large tasks. While there are many tenets of the Unix philosophy, there are three tenets that I will emphasize the most:
  • There is no distinction between user and programmer.
  • Programs should do one thing, and do it well.
  • Users are encouraged to combine small tools into larger tools using mechanisms such as pipes, I/O redirection, and shell scripting instead of developing large, monolithic applications that perform multiple tasks. (A tiny sketch of this composition model follows this list.)
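
Here is that sketch, driving the classic pipeline from Python; the equivalent shell one-liner is simply who | wc -l:

    # The Unix composition model: two small tools joined by a pipe.
    # Equivalent to the shell pipeline `who | wc -l`.
    import subprocess

    who = subprocess.Popen(["who"], stdout=subprocess.PIPE)
    wc = subprocess.Popen(["wc", "-l"], stdin=who.stdout, stdout=subprocess.PIPE)
    who.stdout.close()  # let `who` receive SIGPIPE if `wc` exits early
    print(wc.communicate()[0].decode().strip(), "users logged in")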

This philosophy is expressed and taught in the 1984 book The Unix Programming Environment by Brian Kernighan and Rob Pike, Bell Labs researchers who have played a major role in the development of Unix.
The idea of a user environment as a suite of composable tools is not inherently limited to command-line environments. OpenDoc was a project spearheaded by Apple, IBM, and other companies in the mid-1990s that encouraged software vendors to develop and sell components that can be combined by users and other developers into larger solutions, whether in the form of a document or a larger application. The business goal was to challenge the dominance of large, monolithic applications by creating an ecosystem of smaller, composable utilities, allowing more software companies to compete in the software marketplace and providing users and developers increased flexibility in their workflows. These components would run on the classic Mac OS, Apple’s eventually-cancelled Copland project, IBM OS/2, and other supported operating systems. Unfortunately, other than the influential Cyberdog web browser and a small handful of other OpenDoc components, OpenDoc did not last very long in the marketplace, and its impact was limited. OpenDoc’s development stopped in 1997 when Apple cut many engineering projects in order to focus on adapting the technology from the newly-acquired NeXT to its operating system strategy, which ultimately led to the release of Mac OS X 10.0 in March 2001.
Despite this setback, I believe OpenDoc was a victim of Apple’s circumstances, and I believe the ideas of OpenDoc should be re-explored for today’s desktop software.

Composable Tools Are Objects

One important key to building composable tools that work in programmatic, command-line, and GUI environments is using objects. OpenDoc was a C++ API backed by IBM’s System Object Model. However, there are rich dynamic object models that we can explore as alternatives, including Smalltalk’s derivatives such as Squeak and Pharo, Objective-C (which was heavily influenced by Smalltalk), and the Common Lisp Object System. By using dynamic objects as the foundation for software components, we can overcome the limitations of Unix’s pipeline approach to program composition, which relies on the weak link of parsing streams of text, and we can take advantage of the flexibility that dynamic objects provide as opposed to static objects. In fact, we can think of Unix utilities as objects with a run() method that accepts the utility’s command-line arguments and outputs a string. By explicitly expressing tools as objects, we allow for a much wider range of inputs and outputs that are not limited to text streams, resulting in a richer experience.
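Here is a minimal sketch, entirely my own invention rather than OpenDoc’s or any existing system’s API, of what expressing tools as objects could look like. Two small tools are composed directly, passing structured objects instead of parsing text:

    # A hypothetical sketch of tools as objects. Composition passes
    # typed objects between tools, avoiding the fragile text-parsing
    # step that Unix pipelines rely on.
    from dataclasses import dataclass

    @dataclass
    class Process:
        pid: int
        command: str
        rss_kb: int  # resident memory, in kilobytes

    class ListProcesses:
        """Analogue of `ps`, but it returns objects, not lines of text."""
        def run(self):
            return [Process(101, "emacs", 204800),
                    Process(202, "gcc", 51200)]

    class TopByMemory:
        """Analogue of `sort | head`, operating on objects directly."""
        def __init__(self, n):
            self.n = n
        def run(self, procs):
            return sorted(procs, key=lambda p: p.rss_kb, reverse=True)[:self.n]

    # Compose the two tools; there is no text format to agree on.
    top = TopByMemory(1).run(ListProcesses().run())
    print(top[0].command)  # -> emacs

Nothing in this composition assumes a text terminal, so a GUI shell could render the same result as a table or a chart rather than printed text.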

Separating UI from Core Functionality

A very important design tenet when developing composable tools is to separate the tool’s user interface from the tool’s core functionality. By not tightly coupling the UI with the underlying functionality, it is easier for the tool to be used under a variety of circumstances, whether that is (1) being invoked as an API call, (2) being run as a command-line utility, (3) being run as a desktop GUI application, or something else entirely.
As an example, suppose I want to provide a component that supplies a calendar. The object implementing the core functionality would supply methods such as retrieving the days of the week, determining whether the current year is a leap year, computing the number of days between two dates, storing events in a provided database, exporting to iCalendar format, etc. The Unix cal utility and GNOME Evolution could then be rewritten to use the calendar component. In this way the same core calendar functionality can be used in a variety of settings without being locked inside any particular application.
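A sketch of what that calendar component could look like in Python (the class and method names are mine, invented for illustration); note how a cal-style command-line front end becomes a few lines on top of the UI-free core:

    # A UI-free calendar core; the same object could sit behind an API
    # call, a command-line tool, or a GUI such as Evolution.
    import calendar
    from datetime import date

    class CalendarCore:
        def is_leap_year(self, year):
            return calendar.isleap(year)

        def days_between(self, start, end):
            return (end - start).days

        def month_grid(self, year, month):
            # Weeks as rows of day numbers; 0 marks days outside the month.
            return calendar.monthcalendar(year, month)

    # A cal-style command-line front end on top of the core:
    if __name__ == "__main__":
        core = CalendarCore()
        print("Leap year?", core.is_leap_year(date.today().year))
        for week in core.month_grid(2020, 4):
            print(" ".join(f"{day:2d}" if day else "  " for day in week))
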
This is also a guideline for converting existing software. Suppose GIMP’s core functionality were separated from its interface. This would allow programs and command-line utilities to leverage GIMP’s image manipulation features without having to open the GIMP GUI application. It would also allow for easier development of alternative UIs for GIMP, especially when combined with the ability for users to customize the UI themselves.

UI Customizability and Themability

How will users be able to customize the GUI? I envision this to be a combination of two technologies:
  1. OpenDoc’s ability to merge components in a free-form style as part of a visual container structure known as a Document.
  2. Microsoft Office’s longstanding support (in every version since at least Office 97, with the exception of Office 2007) for letting users change menus, toolbars, and/or the ribbon as they see fit.
How is this exposed programmatically? Each GUI component exports methods that implement some type of command. For example, a text editor would have commands corresponding to “Save File,” “Find/Replace,” “Delete Specified Lines,” etc. These commands would correspond to either menu items or toolbar buttons. Users can then modify how menus look, whether to use icons or words to describe commands in toolbars, whether to have a horizontal toolbar or a vertical one, etc.
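Here is a rough sketch of how that could be exposed, again using invented names rather than any real framework’s API. The component only exports named commands; a user-editable layout, which could just as well come from a downloaded configuration, decides whether each command surfaces as a menu item or a toolbar button:

    # A component exports named commands; the UI layout is data that
    # the user can edit, not something hardcoded into the component.
    class TextEditorComponent:
        def __init__(self):
            self.commands = {
                "Save File": self.save_file,
                "Find/Replace": self.find_replace,
            }

        def save_file(self):
            print("saving the current buffer...")

        def find_replace(self):
            print("opening find/replace...")

    # Move a command between a menu and a toolbar by editing this data,
    # with no changes to the component itself.
    ui_layout = {
        "menu:File": ["Save File"],
        "toolbar:main": ["Find/Replace"],
    }

    editor = TextEditorComponent()
    for slot, command_names in ui_layout.items():
        for name in command_names:
            # A real framework would create the menu item or toolbar
            # button here and bind it to the command; we just invoke it.
            print(f"[{slot}] {name}:")
            editor.commands[name]()
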
All UI elements would be implemented under a common framework, and the framework chosen or developed would allow theming.

Not a New Operating System

For some time I’ve thought of the idea of creating either a Smalltalk-based operating system (e.g., imagine Pharo running on bare metal instead of as a siloed VM) or a Lisp operating system influenced by Symbolics Genera. The underpinnings of these systems would make an excellent base for implementing the ideas discussed in this document.
However, there are two main challenges with this approach of creating a new operating system:
  1. A new operating system will lack device drivers. One of the things that hinders the development of non-Linux libre operating systems such as Plan 9, Haiku, and ReactOS is their relatively limited driver support, and writing device drivers for a new operating system would be a major effort.
  2. There is also the “chicken-and-egg” problem of switching to a new operating system. Without certain key components such as a web browser (porting Firefox or Chromium would be a large effort, and creating a new web browser an even larger one), it would be hard to convince people to switch to the new operating system. But if the operating system has few users, then developers would be less likely to develop for it.
As much as I’d love to use a Smalltalk or Lisp operating system, I believe the best approach would be not to create a new operating system but to leverage the libre GNU/Linux/BSD ecosystem and build on top of it. This solves both the device driver problem and the “chicken-and-egg” problem. Users can still run existing applications and use existing tools side-by-side with new components. These new components can even leverage the same GUI toolkits as existing applications in order to ensure overall system consistency.

Decisions to Make: Object Systems and GUI Frameworks

Two core decisions I’m considering are the object system and the GUI framework. For the object system, I am enamored of the powerful Common Lisp Object System, which would also bring Common Lisp’s impressive live debugging features, but Objective-C is appealing due to its ability to call C and C++ code without the use of any wrappers. For the GUI framework, I am partial to GNUstep due to its Objective-C foundation, which supports dynamic dispatch and would thus make it easier to implement component-based systems. Using GNUstep also has the side bonus of bringing macOS users into the fold by providing native support for macOS.
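As a sketch of why dynamic dispatch matters here, written in Python since Objective-C’s respondsToSelector: and CLOS generic functions express the same idea: a container can ask any component at runtime whether it understands a message, with no compile-time knowledge of the component’s class (the component names below are invented):

    # With dynamic dispatch, a container can interrogate components at
    # runtime instead of requiring a fixed compile-time interface.
    class ClockComponent:
        def render(self):
            return "[clock 10:45]"

    class ChartComponent:
        def render(self):
            return "[chart: sales by month]"
        def export_csv(self):
            return "month,sales\nJan,42\n"

    def export_everything(components):
        for c in components:
            # Check-and-dispatch happens at runtime, much like
            # Objective-C's respondsToSelector: followed by a message send.
            if hasattr(c, "export_csv"):
                print(c.export_csv())

    export_everything([ClockComponent(), ChartComponent()])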

Conclusion

Today’s desktop environments and applications suffer from a lack of flexibility, a lack of customizability, and a lack of composability among applications. This document has proposed a new type of desktop environment and approach to application development that emphasizes components: objects that can be composed in ways that are even more powerful than Unix’s composable command-line tools. It also advocates the development of GUI components that can be themed and that have malleable user interfaces. This new desktop environment would be built on existing Unix-like operating systems such as Linux and BSD.

Monday, June 17, 2019

Some Thoughts About GNUstep

Since 2004, when as a high school junior I was first exposed to Unix-based operating systems such as Linux and Mac OS X, I've been interested in GNUstep.  GNUstep is a free, open source implementation of the OpenStep API from NeXT, which later evolved into Cocoa, the API used to create macOS applications.  Over the years, GNUstep's mission has evolved into striving to keep up with the additions made to Cocoa in each passing version of macOS.  However, as of this writing, GNUstep only guarantees compatibility with up to Mac OS X 10.4 Tiger, which was released in 2005; the current version of macOS is macOS 10.14 Mojave, with macOS 10.15 Catalina coming out later this year.

Since GNUstep's inception in the mid-1990s, many people have envisioned a Linux desktop environment powered by GNUstep, whether that be a faithful modern-day workalike of NeXTSTEP or macOS, or a completely different desktop environment such as Étoilé, which has its own design and UI guidelines.  However, as of 2019, this vision remains a dream, with GTK+-based desktops such as GNOME, MATE, and Cinnamon dominant among Linux desktop users, alongside the original Linux desktop environment: the Qt-based KDE.  Some people, including myself, have lamented the fact that GNUstep's progress has been slow relative to these more popular desktops.

This is my opinion, but I believe the following are the reasons why KDE and GNOME ended up taking off while GNUstep's development has been relatively slow for the past two decades:

  1. KDE was announced in 1996, during GNUstep's infancy.  Of all the GUI toolkits available to developers of free, open source software for Linux in 1996, Qt was the only one that satisfied Matthias Ettrich's needs.  Work started on KDE, and according to Wikipedia, KDE 1.0 was released in July 1998.  Unfortunately, Qt's license at the time was incompatible with the GNU General Public License, one of the major licenses used by free, open source software projects.  While many Linux users did not find this objectionable, others felt otherwise, which ultimately led to the announcement of the GNOME desktop project in August 1997.  GNOME was based on the GTK+ toolkit, which originated as the toolkit of The GIMP image editing application and released its first stable version in April 1998; GNOME would release the first version of its desktop environment in March 1999.  However, while all of this was taking place, GNUstep was still not finished implementing the original OpenStep API.  Had GNUstep been ready in 1996 or 1997, there's a strong likelihood that someone like Matthias Ettrich would have built a desktop around it.
  2. Cocoa is a moving target, with changes made to the API every year or two on average.  Unfortunately, GNUstep does not have the personnel needed to keep pace with Apple's changes, similar to how the Wine and ReactOS projects are perennially behind Microsoft Windows, or how long it took Haiku, a clone of BeOS made by volunteers, to reach beta status (and BeOS has been dead for nearly 20 years!).  My understanding is that GNUstep has been developed entirely by volunteers throughout its history.  By comparison, the GNOME desktop has a long history of corporate backing, and the aforementioned Qt framework used by KDE is commercially developed.  Unfortunately, the fact that GNUstep is over a decade behind macOS in terms of compatibility with Cocoa deters developers who want to use modern, up-to-date GUI frameworks.
  3. GNUstep, being based on OpenStep, is an Objective-C framework, while GTK+ is based on C and KDE is based on C++.  When the GNUstep project started, Objective-C was considered a niche language, and even to this day Objective-C is mostly used by developers of NeXT/Apple platforms.  The pool of Objective-C developers is considerably smaller than those of C and C++ developers.  Plus, with the increased importance of the Swift programming language, there's a chance that Apple may deprecate Objective-C in favor of Swift in the future, further reducing the pool of Objective-C developers.
Even with the challenges that GNUstep faces, I'm still holding out hope that GNUstep will increase in popularity and one day reach API compatibility with newer versions of macOS, which would make the framework more attractive to developers.  I also hope that the Étoilé project will be restarted in order to bring a modern GNUstep-based desktop to Linux.  Given the increased discontent that some macOS users have over the state of their platform, it would be nice if a similar alternative based on GNUstep were available.

Tuesday, June 11, 2019

My 2019 Mac Pro Disappointment and Thoughts of a New Operating System

I believe that Apple's announcement of the 2019 Mac Pro at the 2019 Apple Worldwide Developers Conference has finally brought much clarification regarding Apple's position on the Mac.  During 2016 and 2017, Mac users like myself felt that Apple had abandoned the desktop Mac market, particularly for pro users.  After all, the Mac Mini went for years without a refresh after the 2014 model was released, and the previous-generation Mac Pro hadn't been updated since its release in 2013.  But after years of silence, Apple spoke up in April 2017 by doing something that hadn't been done since the unrealized murmurs of a PowerBook G5 back in 2004 and 2005, before the Intel switch: Apple announced that a new, modular Mac Pro was in the works and that it would be released at a later date.  2017 and 2018 did not bring any Mac Pro product releases other than a drop in price for the 2013 Mac Pro, but they did bring the release of the iMac Pro and the long-awaited refresh of the Mac Mini.  Finally, on June 3, 2019, Apple announced the highly-anticipated new Mac Pro model, a user-serviceable, upgradeable, and expandable tower computer reminiscent of the Power Mac G5 and the 2006-2012 Mac Pro.

While I believe that this announcement has shown that Apple is still committed to the Mac and that Apple is willing to make very powerful machines for its most technologically demanding customers, I also believe that the new Mac Pro is a disappointment for some Mac users (including myself), and that the implicit statements Apple is making about its Mac product line have some unfortunate implications for users such as myself.

For about two decades, Apple sold entry-level Power Macintosh and Mac Pro models at the inflation-adjusted price point of $2,500-$3,000.  The Power Macintosh and the Mac Pro were (and still are) Apple's models that provide user-serviceability, upgradability, and (with the exception of the 2013 "trash can" Mac Pro) internal expandability.  Starting with the 2008 MacBook Air and the 2012 Retina MacBook Pro, Macs gradually became less user-serviceable.  RAM started to get soldered onto the motherboard, and batteries became more difficult to replace.  This then started to spread to Apple's consumer desktops: the 2014 Mac Mini has soldered RAM, and many models of the iMac also have soldered RAM.  The 2016 MacBook Pro was the first Mac to have soldered storage, making it impossible to remove the storage device from the computer, which is important for data recovery.  The 2018 Mac Mini thankfully no longer has soldered RAM (although RAM installation must still be done by an Apple-authorized repair center), but it has soldered storage.  Users who wanted user-serviceability and upgradability were pointed to the Mac Pro, Apple's only model that offers these things.

However, when Apple announced the 2019 Mac Pro, it announced a starting price of $5,999, which is double the $2,999 starting price of the previous-generation Mac Pro, making it the highest-priced entry-level Power Macintosh or Mac Pro since the mid-1990s.  For users of previous-generation entry-level Mac Pro models (like myself; I own an entry-level 2013 Mac Pro that I bought in April 2017 after Apple discounted its price), this news is disappointing, since $5,999 is a tremendous leap from $2,999.  I was prepared for a $2,999 or even $3,499 announcement, but not for a $5,999 one, which is well beyond my budget for a computer.  Unfortunately, I'm left with the following options when it is time to upgrade, none of which appeals to me:
  • Sacrifice user-serviceability and upgradability by purchasing a Mac Mini or iMac.  However, user-serviceability and upgradability are very important to me.  I would like to take advantage of falling prices over time, upgrading my computer whenever it's necessary instead of having to guess my anticipated needs for the next few years and buy the upgrades up front at today's prices, not to mention that Apple charges a considerable sum for upgrades.  Moreover, non-serviceability precludes easy repair and easy data recovery.
  • Scrimp and save for a 2019 Mac Pro.  Don't get me wrong; the 2019 Mac Pro is an excellent machine.  I would love to have one if I had the money.  But $6,000 is steep for a personal computer even on a Silicon Valley computer scientist's salary.
  • Switch to Windows 10 or Linux.  After the controversial 2016 MacBook Pro was released, I promptly purchased a refurbished ThinkPad T430 at Fry's for less than $150 in order to reacquaint myself with Windows, which I hadn't used regularly since the Windows XP days.  My assessment of Windows 10 is that its technical underpinnings are solid, and its Windows Subsystem for Linux has made it possible for me to do Unix-style programming on Windows (one of the reasons I use macOS is the fact that it is Unix underneath).  Unfortunately, I find the interface gaudy (and the fact that my ThinkPad had a 1366x768 screen didn't help matters, since Windows 10's interface seems to be optimized for high-resolution displays), and I find the advertisements, telemetry, and mandatory updates very annoying.  I also tried various Linux distributions, including KDE Neon and Linux Mint (which I currently use).  While I can be productive in Linux, I still find myself missing macOS.  I miss programs like Dictionary.app and Photos.app, and a recent update to Linux Mint 19 somehow broke Japanese text input in Firefox (although it still works in other applications).  I prefer the various Linux desktops like MATE and KDE to Windows 10, but I love the Mac's attention to detail, especially when it comes to font rendering.  I can make do with either Windows 10 or Linux, but I find myself more productive in macOS, which provides a more polished, more consistent, less buggy, and far less annoying experience than either.
  • Build a Hackintosh.  While I find the prospect of using macOS on PC hardware intriguing, unfortunately this is a non-starter for me.  I don't want to sound sanctimonious, but as a professional in the tech industry, I want to respect software licenses, even though I feel that users should be able to have the freedom to install whatever operating system they want on their hardware.  Also, for users who have no qualms with violating the macOS EULA, there are other challenges such as getting iCloud and iPhone integration to work properly on Hackintoshes, and there's also the prospect of Apple rendering Hackintoshing extremely difficult or impossible to do in the future through the use of Apple's T2 chip, which has been included in every Mac that has been redesigned since 2017 with the introduction of the iMac Pro.
Given these options, I'll express a lament about the state of personal computing these days: you can legally have a polished OS tied to restrictive hardware (unless you have $5,999 to shell out for a Mac Pro) or an unpolished OS running on a wide variety of hardware with varying degrees of freedom regarding user-serviceability, upgradeability, and expandability.  However, you can't have both (unless you want to build a Hackintosh): a polished OS running on the hardware of your choice.

What would it take for a new competing operating system to emerge, one that is not restricted to a particular vendor's hardware and yet is polished?  Unfortunately, it would take a very large amount of work for it to reach parity with even Windows 10 and desktop Linux distributions.  Below are the most formidable problems such an effort would face:
  • There's the classic chicken-and-egg problem of software availability and user adoption: developers are less likely to develop for a new platform unless they're convinced it will attract a significant amount of users, and users are less likely to adopt a new platform if there are no software tools available for them to do their desired tasks.
  • There's the sheer amount of time and resources needed to create a modern operating system from the ground up.  All of our modern desktop operating systems (macOS, Windows 10, desktop Linux distributions) evolved over many decades.  I believe the last semi-successful example of a consumer OS built from scratch was BeOS, which was built in the 1990s; I say "semi-successful" because it gained a cult following among the users it attracted but ultimately failed in the marketplace.  Apple's Taligent and Copland projects, both from the 1990s, were radical attempts to build new consumer operating systems from the ground up, but they were never finished despite the resources these projects were given.
  • Obtaining hardware support for a new operating system is challenging.  Many hardware vendors do not publicly provide the documentation needed for independent developers to write device drivers supporting their hardware, and many vendors are only willing to provide closed-source drivers for popular operating systems (see the chicken-and-egg problem above).  It is possible to perform reverse-engineering to create device drivers, but this is difficult to do with complex hardware and sometimes requires substantial resources.
  • Even though the rise of the Web and mobile computing has made platforms less important today than they were in the 1990s, there is still a need for native software, and there's also still a need to interact with dominant file formats and protocols.  Part of the reason why desktop Linux (and even the Mac, for that matter) has struggled for adoption is compatibility with certain more popular software packages.  Consider how long projects like GIMP and LibreOffice have existed and how they still struggle against more dominant products like Adobe Photoshop and Microsoft Office, respectively.  Part of that struggle is dealing with the files created by dominant software packages, which are often encoded in proprietary formats.  And projects that work specifically on application interoperability, whether at the source or binary level, tend to struggle.  Consider the long struggles of projects such as Wine (a Win32 compatibility layer for Unix-like operating systems), ReactOS (a Windows clone), and GNUstep (a reimplementation of the Cocoa API used in macOS, which is derived from the OpenStep API from the NeXT era).  Windows and macOS are moving targets, so those projects will continue to play catch-up with fewer resources than their corporate counterparts; but even projects with frozen targets, like FreeDOS (a FOSS clone of MS-DOS) and Haiku (a FOSS clone of BeOS), took a long time to become mature enough to be usable, largely due to the small amount of resources available to them relative to the resources used to develop the original systems.
  • If the new operating system is commercial, then how do we develop a sustainable business model, especially in a world where people expect software such as operating systems to be free?  If the new operating system is open source, then how do we attract and retain developers?
Now, it is possible to mitigate some of these concerns by building on the work of others.  For example, we can use Linux or one of the BSDs in order to avoid having to write an operating system kernel and also to dramatically reduce the number of drivers that would have to be written.  In fact, this is what Google and Apple did to create Android and macOS, respectively; Android uses a modified Linux kernel, while macOS, derived from NeXTSTEP, was built on Carnegie Mellon University's Mach microkernel and 4.3BSD (later upgraded to FreeBSD in the Mac OS X days).

Another way of decreasing the amount of time needed for such an effort is to take advantage of computer science research that was not available in previous decades.  For example, Viewpoints Research Institute worked on a project named STEPS that sought to dramatically reduce the number of lines of code necessary to write a full-fledged operating system by using domain-specific languages to write the various subsystems.  An implementation of an operating system inspired by the STEPS project may encourage the rapid development of useful applications for it in a similar style, thus potentially revolutionizing software development.

I've been thinking a lot about another cancelled Apple project from the 1990s called OpenDoc, which was an attempt to make GUI application development component-based rather than monolithic, similar in spirit to the Unix philosophy of small utilities that interact with each other through pipes and I/O redirection.  The ultimate realization of a component-based GUI would be the Smalltalk environment from Xerox PARC, where everything in the environment is an object that can be manipulated by other objects in the system.  I read an insightful comment on Hacker News (https://news.ycombinator.com/item?id=13573373) arguing that the Linux desktop might have been more competitive had it embraced an OpenDoc-like style of component-based software instead of trying to fight Microsoft, Apple, Adobe, and other major software companies head-on by building large, monolithic software packages like LibreOffice and GIMP.

The challenge with component-based GUIs, though, is maintaining a common standard of UI conventions across components.  UI consistency across applications is one of the strongest suits of macOS, and this is also true of its ancestors: both the classic Mac OS and NeXTSTEP.  By contrast, although the pipes-and-redirection approach of Unix command-line utilities works very well, there isn't a lot of consistency between Unix tools, with argument flags often differing between utilities despite having similar meanings (for example, whether to use -r or -R for a recursive operation depends on the tool).

Despite these challenges, I believe the time is ripe for a polished desktop operating system that serves as a competitor to macOS, Windows 10, and desktop Linux distributions.  This OS should attract users who are dissatisfied with today's current OS offerings and who desire consistency, usability, and reliability.

Saturday, September 20, 2014

I'm Back!

Hello, readers!  I've decided to resume blogging!  A lot has changed during the past three years that I haven't been updating this blog.  However, many things are still the same.  I am still a grad student, hopefully finishing up within the next 21 months.  I am still studying Japanese, although my progress has regrettably slowed due to the demands of grad school.  And I am still highly interested in the things that I have posted about in the past, including old Macs.  In fact, I'm happy to announce that I just bought a NeXTstation Turbo Color on eBay, which I'll be posting about in upcoming weeks.

Looking forward to writing again!

Monday, November 14, 2011

My Japanese Learning Plan

This upcoming December will mark the 12th anniversary of my beginning to study the Japanese language. I started learning Japanese back in December 1999. Nearly twelve years later, I am unfortunately still not fluent in the language. To make a long story short, I actively studied the language between 1999 and 2005 and even attended a Japanese language school named Sakura Gakuen from 2003 to 2005. However, I took a break of sorts during my undergraduate career at Cal Poly due to the demands of my coursework. But upon getting an offer to do an internship at Fujitsu Labs in Japan, I started to take Japanese much more seriously. My Japanese skills improved dramatically during my time in Japan in 2010, and since returning to America I have spent a good chunk of my spare time studying Japanese vocabulary and kanji, as well as watching Japanese dramas and movies and browsing Japanese websites (with the help of Rikaikun).

My goal is to become fluent in Japanese in 2014, which is around the time I should be finished with my PhD program. I am interested in working in Japan after I graduate, either in an industrial research lab or perhaps at a Japanese university (although I have a lot to learn about how academia works in Japan). Of course, I would need to be fluent in Japanese in order to qualify for a full-time research position out there. Suppose I become a professor at a Japanese university, for example. I would need to be fluent in Japanese in order to convey the course material effectively to my students.

Below is my study plan for the foreseeable future (not in any particular order):
  • Finish Remembering the Kanji I, which is a book that covers the basic 1,945 kanji taught in Japanese public schools, as well as some additional characters.
  • Study the "Core 6000" deck, which consists of the 6,000 most commonly used Japanese words. I am almost done studying the Core 2000 deck, which covers the top 2,000 of these words (I only have about 250 words remaining in my deck; I should be finished studying it next week).
  • Study All About Particles, The Handbook of Japanese Verbs, and A Dictionary of Intermediate Japanese Grammar. After I pass JLPT Level N2, I plan on purchasing A Dictionary of Advanced Japanese Grammar and studying it.
  • Read the stack of manga, magazines, novels, and other Japanese books that I bought while I lived in Japan.
  • Continue watching more Japanese movies and dramas.
  • Study for the JLPT. I plan to take Level N2 of the JLPT in December 2012, and Level N1 of the JLPT in December 2013.
  • Take a trip to Japan on vacation sometime in 2013 (okay, so this isn't exactly "studying" per se, but I will get a chance to use my Japanese again).
Hopefully this works out!

Wednesday, May 25, 2011

Words with Similar Meanings in Japanese

Right now I am studying the Core 2000 Japanese vocabulary list, which consists of the 2,000 most commonly used words in Japanese. I have been working my way through the vocabulary list for almost a month; I spend about 10-20 minutes a day studying the list via a flash card program called Anki, which is an excellent program for studying Japanese (or any other language, for that matter). While many of the words that I've encountered are words I was already very familiar with, there are others that I did not know until I encountered them in the vocabulary list. So far I have gotten through the first 513 words in the collection; many of those words I am now comfortable with. I should be finished studying the word list by the end of the summer.

One very interesting thing I discovered through my studies is that Japanese has a lot of words that are very similar to other words but have a slight variation in meaning. For example, back when I was at Fujitsu, I learned the difference between 完了 and 終了, which both mean "to finish" but have slightly different connotations (the former implies that a task was completed, while the latter implies that something ended [but not necessarily completed], e.g., プロジェクトを完了しました [I completed the project] and プログラムが終了しました [The program ended]).

Here are some additional groups that I noticed:
考える (to think, consider) vs. 思う (to think) vs. 検討する (to consider)
仕事 (work, job) vs. 作業 (work)
完了 vs. 完成 (both meaning "to finish, complete")
去年 vs. 昨年 (both meaning "last year")
変える vs. 変わる vs. 変化する (all meaning "to change")
大統領 vs. 社長 (both meaning "president")
開く 「あく」 vs. 開く 「ひらく」 (both meaning "to open"; notice that they are written exactly the same but pronounced differently)
行く 「いく」 vs. 行く 「ゆく」 (both meaning "to go"; same situation as above)
見せる vs. 示す (both meaning "to show")
閉める vs. 閉まる vs. 閉じる (all meaning "to close, to shut")
必要とする vs. 要る (both meaning "to need")

It would be very interesting to see the differences between these words.