
October 14, 2022

Signaled

A while back, I was trying to get a friend to install the encrypted messaging app Signal.

"Oh, I don't want another messaging app."

Well, I said, it's not *another* messaging app. Use it to replace the app you currently use for texting (SMS) and it will just sit there showing you your text messages. But whenever you encounter another Signal user those messages will be encrypted. People sometimes accepted this; more often, they wanted to know why I couldn't just use WhatsApp, like their school group, tennis club, other friends... (Well, see, it may be encrypted, but it's still owned by the Facebook currently known as Meta.)

This week I learned that soon I won't be able to make this argument any more, because...Signal will be dropping SMS support for Android users sometime in the next few months. I don't love either the plan or the vagueness of its timing. (For reasons I don't entirely understand, this doesn't apply to the nether world of iPhone users.)

The company's blog posting lists several reasons. Apparently the app's SMS integration is confusing to many users, who are unclear about when their messages are encrypted and when they're not. Whether this is true is being disputed in the related forum thread discussing this decision. On the bah! side is "even my grandmother can use it" (snarl) and on the other the valid evidence of the many questions users have posted about this over the years in the support forums. Maybe solvable with some user interface tweaks?

Second, the pricing differential between texting and Signal messages, which transit the Internet as data, has reversed since Signal began. Where data plans used to be rare and expensive, and SMS texts cheap or bundled with phone service, today data plans are common, and SMS has become expensive in some parts of the world. There, the confusion between SMS and Signal messaging really matters. I can't argue with that except to note that equally it's a problem that does *not* apply in many countries. Again, perhaps solvable with user settings...but it's fair enough to say that supporting this may not be the best use of Signal's limited resources. I don't have insight into the distribution of Signal's global user base, and users in other countries are likely to be facing bigger risks than I am.

Third is sort of a purity argument: it's inherently contradictory to include an insecure protocol in an app intended to protect security and privacy. "Inconsistent with our values." The forum discussion is split on this. While many agree with this position, many of the rest of us live in a world that includes lots of people who do not use, and do not want to use (see above), Signal, and it is vastly more convenient to have a single messaging app that handles both.

Signal may not like to stress this aspect, but one problem with trusting an encrypted messaging app in the first place is that the privacy and security are only as good as your correspondents' intentions. Maybe all your contacts set their messages to disappear after a week, password-protect and encrypt their message database, and assign every contact an alias. Or, maybe they don't password-protect anything, never delete anything, and mirror the device to three other computers, all of which they leave lying around in public. You cannot know for sure. So a certain level of insecurity is baked into the most secure installations no matter what you do. I don't see SMS as the biggest problem here.

I think this decision is going to pose real, practical problems for Signal in terms of retaining and growing its user base; it surely does not want the app's presence on a phone to become governments' watch-this-person flag. At least in Western countries, SMS is inescapable. It would be better if two-factor authentication used a less hackable alternative, but at the moment SMS is the widespread vector of corporate choice. We consumers don't actually get to choose to dump it until they do. A switch is apparently happening very slowly behind the scenes in the form of RCS, which I don't even know if my aged phone supports. In the meantime, Signal becomes the "another messaging app" we began with - and historically, diminished convenience has been one of the biggest blocks to widespread adoption of privacy-enhancing technologies.

Signal's decision raises the possibility that we are heading into a time where texting people becomes far more difficult. It may become like the early days, when you could only text people using the same phone company as you - for example, Apple has yet to adopt RCS. Every new contact will have to start with a negotiation by email or phone: how do I text you? In *addition* to everything else.

The Internet isn't splintering (yet); email may be despised, but every service remains interoperable. But the mobile world looks like breaking into silos. I have family members who don't understand why they can't send me iMessages or FaceTime me (no iPhone?), and friends I can't message unless I want to adopt WhatsApp or Telegram (groan - another messaging app?).

Signal may well be right that this move is a win for security, privacy, and user clarity. But for communication? In *this* house, it's a frustrating regression.

Illustrations: Midjourney's rendering of "railway signal tracks crossing".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 16, 2022

Coding ethics

Why is robotics hard?

This was Bill Smart's kickoff on the first (workshop) day of this year's We Robot. It makes sense: We Robot is 11 years old, and if robots were easy we'd have them by now. The basic engineering difficulties are things he's covered in previous such workshops: 2021, 2019, 2018, 2016.

More to the point for this cross-the-technicians-with-the-lawyers event: why is making robots "ethical" hard? Ultimately, because the policy has to be translated into computer code, and as Smart and others explain, the translation demands an order of precision humans don't often recognize. Wednesday's workshops explored the gap between what a policy says and what a computer can be programmed to do. For many years, Smart has liked to dramatize this gap by using four people to represent a "robot" and assigning a simple task. Just try picking up a ball with no direct visual input by asking yes/no questions of a voltage-measuring sensor.

This year, in a role-playing breakout group, we were asked to redesign a delivery robot to resolve complaints in a fictional city roughly the size of Seattle. Injuries to pedestrians have risen since delivery robots arrived; the residents of a retirement community are complaining that the robots' occupation of the sidewalks interferes with their daily walks; and one company sends its delivery robot down the street past a restaurant while playing ads for its across-the-street competitor.

It's not difficult to come up with ideas for ways to constrain these robots. Ban them from displaying ads. Limit them to human walking speed (which you'll need to specify precisely). Limit the time or space they're allowed to occupy. Eliminate cars and reallocate road space to create zones for pedestrians, cyclists, public transport, and robots. Require lights and sound to warn people of the robots' movements. Let people ride on the robots. (Actually, not sure how that solves any of the problems presented, but it sounds like fun.)

As you can see from the sample, many of the solutions that the group eventually proposed were only marginally about robot design. Few could be implemented without collaboration with the city, which would have to agree to and pay for infrastructure changes or develop policies and regulations specifying robot functionality.

This reality was reinforced in a later exercise, in which Cindy Grimm, Ruth West, and Kristen Thomasen broke us into robot design teams and tasked us with designing a robot to resolve these complaints. Most of the proposals involved reorganizing public space (one group suggested sending package delivery robots through the sewer system rather than on public streets and sidewalks), sometimes at considerable expense. Our group, concerned about sustainability, wanted the eventual robot made out of 3D-printed engineered wood, but hit physical constraints when Grimm pointed out that our comprehensive array of sensors wouldn't fit on the small form factor we'd picked - and would be energy-intensive. No battery life.

The deeper problem we raised: why use robots for this at all? Unless you're a package delivery company seeking to cut labor costs, what's the benefit over current delivery systems? We couldn't think of one. With Canadian journalist Paris Marx's recent book on autonomous vehicles, Road to Nowhere, fresh in my mind, however, the threat to public ownership of the sidewalk seemed real.

The same sort of question surfaced in discussions of a different problem, based on Paige Tutosi's winning entry in a recent roboethics competition. In this exercise, we were given three short lists: rooms in a house, people who live in the house, and objects around the house. The idea was to come up with rules for sending the objects to individuals that could be implemented in computer code for a robot servant. In an example ruleset, no one can order the robot to send a beer to the baby or chocolate to the dog.

My breakout group quickly got stuck in contemplating the possible power dynamics and relationships in the house. Was the "mother" the superuser who operated in God mode? Or was she an elderly dementia patient who lived with her superuser daughter, her daughter's boyfriend, and their baby? Then someone asked the killer question: "Who is paying for the robot?" People whose benefits payments arrive on prepay credit cards with government-designed constraints on their use could relate.

The summary reports from the other groups revealed a significant split between those who sought to build a set of rules that specified what was forbidden (comparable to English or American law) and those who sought to build a set of rules that specified what was permitted (more like German law).

For the English approach, you have to think ahead of time of all the things that could go wrong and create rules to prevent them. The German approach is by far the easier to code, and safer for robot manufacturers seeking to limit their liability: the robot's capabilities default to a strictly limited, known-safe set.
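
In code, the difference between the two camps comes down to a default. Here is a toy sketch of my own (the objects, recipients, and rules are invented for illustration; this is not the competition's actual ruleset):

    # English-style: list what's forbidden; everything else is allowed by default.
    FORBIDDEN = {("beer", "baby"), ("chocolate", "dog")}

    # German-style: list what's permitted; everything else is refused by default.
    PERMITTED = {("beer", "mother"), ("chocolate", "mother"), ("toy", "baby")}

    def allowed_english(obj, recipient):
        # fine unless someone thought of this case in advance and banned it
        return (obj, recipient) not in FORBIDDEN

    def allowed_german(obj, recipient):
        # refused unless someone thought of this case in advance and permitted it
        return (obj, recipient) in PERMITTED

    print(allowed_english("whisky", "baby"))   # True - nobody wrote that rule
    print(allowed_german("whisky", "baby"))    # False - the default answer is no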

The fact of this split suggested that at heart developing "robot ethics" is recapitulating all of legal history back to first principles. Viewed that way, robots are dangerous. Not because they are likely to attack us - but because they can be the vector for making moot, in stealth, by inches, and to benefit their empowered commissioners, our entire framework of human rights and freedoms.


Illustrations: Boston Dynamics' canine robot visits We Robot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 9, 2022

The lost penguin

One of the large, ignored problems of cybersecurity is that every site, every supplier, every software coder, every hardware manufacturer makes decisions as if theirs were the only rules you ever have to observe.

The last couple of weeks I've been renewing my adventures with Linux, which started in 2016, and continued later that year and again in 2018 and, undocumented, in 2020. The proximate cause this time was the release of Ubuntu 22.04. Through every version back to 14.04 I've had the same long-running issue: the displays occasionally freeze for no consistent reason, and the only way out is a cold boot. Would this time be the charm?

Of course, the first thing that happened was that trying to upgrade the system in place failed. This isn't my first rodeo (see 2016, part II), and so I know that unpicking and troubleshooting a failure often takes longer than doing a clean install. I had an empty hard drive at the ready...

All the good things I said about Ubuntu installation in 2018 are still true: Canonical and the open source community have done a very good job of building a computer-in-a-box. It installed and it worked, although I hate the Gnome desktop it ships with.

Except.

Everything is absolutely fine unless, as I whined in 2018, you want to connect to some Windows machines. For that, you must download and install Samba. When it doesn't work, Samba is horrible, and grappling with it revives all my memories of someone telling me, the first time I heard of Linux, that "Linux is as user-friendly as a cornered rat."

Last time round, I got the thing working by reading lots of web pages and adding more and more stuff to the config file until it worked. This was not necessarily a good thing, because in the process I opened more shares than I needed to, and because the process was so painful I never felt like going back to put in a few constraints. Why would I care? I'm one person with a very small (wired) computer network, and it's OK if the machines see more of each other's undergarments than is strictly necessary.

Since then, the powers that code have been diligently at work to make the system more secure. So to stop people from doing what I did, they have tweaked Samba so that by default it's not possible to share your Home directory. Their idea is that you'll have a Public directory that is the only thing you share, and any file that's in it is there because you made a conscious decision to put it there.

I get the thinking, but I don't want to do things their way, I want to do things my way. And my way is that I want to share three directories inside the Home directory. Needless to say, I am not the only recalcitrant person, and so people have published three workarounds. I did them all. Result: my Windows machines can now access the directories I wanted to share on the Ubuntu machine. And: the Ubuntu machine is less secure for a value of security that isn't necessarily helpful in a tiny wired home network.
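
For the record, the destination is simple enough: each share the Windows machines can see corresponds to a stanza along these lines in smb.conf (an illustrative sketch with a made-up path and username, not my actual configuration; the workarounds are about getting modern Samba to accept a path under /home at all):

    # illustrative share definition for a directory inside Home
    [projects]
       path = /home/wendy/projects
       # only this account may connect to the share
       valid users = wendy
       read only = no
       browseable = yes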

That was only half the problem.

Ubuntu can see there's a Windows network, and it will even sometimes list the machines correctly, but ask it to access one of them, and it draws a blank. Almost literally a blank: it just hangs there going, "Opening <machine name>" until you give up and hit Cancel. Someone has wrapped a towel around its head, apparently thinking, like the Bugblatter Beast of Traal, that if it can't see you, you can't see it. I now see that this is exactly the same analogy, in almost the identical words, that I used in 2018. I swear I typed it all new this time.

That someone appears to be Microsoft. The *other* problem, it turns out, is that Microsoft also wanted to improve security, and so it's made it harder to open Windows 10 machines to networking with interlopers such as people who run Ubuntu. I forget now the incantation I had to wave over it to get it to cooperate, but the solution I found only worked to admit the Ubuntu shares, not open up the Windows ones.

Seems to me there's two problems here.

One is the widening gap between consumer products and expert computing. The reality of mass adoption confirms that consumer computing has in fact gotten much easier over time. But the systems we rely on are more sophisticated and complex, and they're meeting more sophisticated and complex needs - and doing anything outside that mainstream has accordingly become much harder, requiring a lot of knowledge, training, patience, and expertise. I fall right into that gap (which is why my website has no Javascript and I'm afraid to touch the blogging software that powers net.wars). In 2016, Samba just worked.

The other, though, is a problem I've touched on before: decisions about product security are made in silos without considering the wider ecosystem and differing contexts in which they're used. Microsoft's or Apple's answer to the sort of connection problem I have is "buy our stuff". The open source community's reaction isn't much different. Which leaves me...wanting to bang all their heads together.


Illustrations: Little penguin swimming (via Calistemon at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 30, 2021

The tonsils of the Internet

Last week the US Supreme Court decided the ten-year-old Google v. Oracle copyright case. Unlike anyone in Jarndyce v. Jarndyce, which bankrupted all concerned, Google will benefit financially, and in other ways so will the rest of us.

Essentially, the case revolved around whether Google violated Oracle's copyright by copying about 11,500 lines (out of millions) of the code that makes up the Java platform's application programming interface. Google claimed fair use. Oracle disagreed.

Tangentially: Oracle owns Java because in 2010 it bought its developer, Sun Microsystems, which open-sourced the software in 2006. Google bought Android in 2005; it, too, is open source. If the antitrust authorities had blocked the Oracle acquisition, which they did consider, there would have been no case.

The history of disputes over copying and interoperability goes back to the 1996 case Lotus v. Borland, in which Borland successfully argued that copying the way Lotus organized its menus was copying function, not expression. By opening the way for software programs to copy functional elements (like menus and shortcut keys), the Borland case was hugely important. It paved the way for industry-wide interface standards and thereby improved overall usability and made it easier for users to switch from one program to another if they wanted to. This decision, similarly, should enable innovation in the wider market for apps and services.

Also last week, the US Congress conducted the latest in its series of antitrust hearings and interrogated Lina Khan, who has been nominated for a position at the Federal Trade Commission. Biden's decision to appoint her, as well as Tim Wu to the National Economic Council, has been taken as a sign of increasing seriousness about reining in Big Tech.

The antitrust hearing focused on the tollbooths known as app stores; in his opening testimony, Mark Cooper, director of research at the Consumer Federation of America, noted that the practices described by the chair, Senator Amy Klobuchar (D-MN), were all illegal in the Microsoft case, which was decided in 1998. A few minutes later, Horacio Gutierrez, Spotify's head of global affairs and chief legal officer, noted that "even" Microsoft never demanded a 30% commission from software developers to run on its platform.

Watching this brought home the extent to which the mobile web, with its culture of walled gardens and network operator control, has overwhelmed the open web we Old Net Curmudgeons are so nostalgic about. "They have taken the Internet and moved it into the app stores", Jared Sine told the committee, and that's exactly right. Opening the Internet back up requires opening up the app stores. Otherwise, the mobile web will be little different than CompuServe, circa 1991.

BuzzFeed technology reporter Ryan Mac posted on Twitter a just-quit Accenture employee's anonymous account of their two and a half years as a content analyst for Facebook. The main points: the work is a constant stream of trauma; there are insufficient breaks and mental health support; the NDAs they are forced to sign block them from turning to family and friends for help; and they need the chance to move around to other jobs for longer periods of respite. "We are the tonsils of the Internet," they wrote. Medically, we now know that the tonsils that doctors used to cheerfully remove play an important role in immune system response. Human moderation is essential if you want online spaces to be tolerably civil; machines simply aren't good enough, and likely never will be, and abuse appears to be endemic in online spaces above a certain size. But just as the exhausted health workers who have helped so many people survive this pandemic should be viewed as a rare and precious resource instead of interchangeable parts whose distress the anti-lockdown, no-mask crowd are willing to overlook, the janitors of the worst and most unpleasant parts of the Internet need to be treated with appropriate care.

The power differential, the geographic spread, their arms-length subcontractor status, and the technology companies' apparent lack of interest combine to make that difficult. Exhibit B: Protocol reports that contract workers in Google's data centers are required to leave the company for six months every two years and reapply for their jobs, apparently just so they won't gain the rights of permanent employees.

In hopes of change, many were watching the Bessemer, Alabama Amazon warehouse workers' vote on unionizing. Now, the results are in: 1,798 to 738 against. You would think that one thing that could potentially help these underpaid, traumatized content moderators - as well as the drivers, warehouse workers, and others who are kept at second-class arm's length from the technology companies who so diligently ensure they don't become full employees - is a union. Because of the potential impact on the industry at large, both the organizing efforts and Amazon's drive to oppose them were closely watched.

Nonetheless, this isn't over. Moves toward unionizing have been growing for years in pockets all over the technology industry, and eventually it will be inescapable. We're used to thinking about technology companies' power in terms of industry consolidation and software licensing; workers are the ones who most directly feel the effects.


Illustrations: The chancellor (Ian Richardson), announcing the end of Jarndyce and Jarndyce in the BBC's 2005 adaptation of Bleak House.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 14, 2020

Revenge of the browser wars

This week, the Mozilla Foundation announced major changes. As is the new norm these days, Mozilla is responding to a problem that existed BCV (before coronavirus) but has been exposed, accelerated, and compounded by the pandemic. But the response sounds grim: approximately a quarter of the workforce to be laid off and a warning that the company needs to find new business models. Just a couple of numbers explain the backdrop: according to Statcounter, Firefox's second-position share of desktop/laptop browser usage has dropped to 8.61% behind Chrome at 69.55%. On mobile and tablets, where the iPhone's Safari takes a large bite out of Chrome's share, Firefox doesn't even crack 1%. You might try to trumpify those percentages by suggesting it's a smaller share but a larger user population, but unfortunately no; at CNet, Stephen Shankland reports that usage is shrinking in raw numbers, too, down to 210 million monthly users from 300 million in 2017.

Yes, I am one of those users.

In its 2018 annual report and 2018 financial statement (PDF), Mozilla explains that most of its annual income - $430 million - comes from royalty deals with search engines, which pay Firefox to make them the default (users can change this at will). The default varies across countries: Baidu (China), Yandex (Russia, Belarus, Kazakhstan, Turkey, and Ukraine), and Google everywhere else, including the US and Canada. It derives a relatively small amount - $20 million or so in total - of additional income from subscriptions, advertising, donations and dividends and interest on the investments where it's parked its capital.

The pandemic has of course messed up everyone's financial projections. In the end, though, the underlying problem is that long-term drop in users; fewer users must eventually generate fewer search queries on which to collect royalties. Presumably this lies behind Mozilla's acknowledgment that it needs to find new ways to support itself - which, the announcement also makes clear, it has so far struggled to do.

The problem for the rest of us is that the Internet needs Firefox - or if not Firefox itself, another open source browser with sufficiently significant clout to keep the commercial browsers and their owners honest. At the moment, Mozilla and Firefox are the only ones in a position to lead that effort, and it's hard to imagine a viable replacement.

As so often, the roots of the present situation go back to 1995, when - no Google then and Apple in its pre-Jobs-return state - the browser kings were Microsoft's Internet Explorer and Netscape Navigator, both seeking world wide web domination. Netscape's 1995 IPO is widely considered the kickoff for the dot-com boom. By 1999, Microsoft was winning and then high-flying AOL was buying Netscape. It was all too easy to imagine both building out proprietary protocols that only their browsers could read, dividing the net up into incompatible walled gardens. The first versions of what became Firefox were, literally, built out of a fork of Netscape whose source code was released before the AOL acquisition.

The players have changed and the commercial web has grown explosively, but the danger of slowly turning the web into a proprietary system has not. Statcounter has Google (Chrome) and Apple (Safari) as the two most significant players, followed by Samsung Internet (on mobile) and Microsoft's Edge (on desktop), with a long tail of others including Opera (which pioneered many now-common features), Vivaldi (built by former Opera developers after Opera was sold to a Chinese consortium), and Brave, which markets itself as a privacy browser. All these browsers have their devoted fans, but they are only viable because websites observe open standards. If Mozilla can't find a way to reverse Firefox's user base shrinkage, web access will be dominated by two of the giant companies that two weeks ago were called in to the US Congress to answer questions about monopoly power. Browsers are a chokepoint they can control. I'd love to say the hearings might have given them pause, but two weeks later Google is still buying Fitbit, Apple and Google have removed Fortnite from their app stores for violating their in-app payment rules, and Facebook has launched TikTok clone Instagram Reels.

There is, at the moment, no suggestion that either Google or Apple wants to abuse its dominance in browser usage. If they're smart, they'll remember the many benefits of the standards-based approach that built the web. They may also remember that in 2009 the threat of EU fines led Microsoft to unbundle its Internet Explorer browser from Windows.

The difficulty of finding a viable business model for a piece of software that millions of people use is one of the hidden costs of the Internet as we know it. No one has ever been able to persuade large numbers of users to pay for a web browser; Opera tried in the late 1990s, and wound up switching first to advertising sponsorship and then, like Mozilla, to a contract with Google.

Today, Catalin Cimpanu reports at ZDNet that Google and Mozilla will extend their deal until 2023, providing Mozilla with perhaps $400 million to $500 million a year. Assuming it goes through as planned, it's a reprieve - but it's not a solution - as Mozilla, fortunately, seems to know.

Illustrations: Netscape 1.0, in 1994 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 17, 2020

Software inside

In 2011, Netscape creator-turned-venture capitalist Marc Andreessen argued that software is eating the world. Andreessen focused on a rather narrow meaning of "world" - financial value. Amazon ate Borders' lunch; software fuels the success of Wal-Mart, FedEx, airlines, and financial services. Like that.

There is, however, a more interesting sense in which software is eating the world, and that's its takeover of what we think of as "hardware". A friend tells me, for example, that part of the pleasure he gets from driving a Tesla is that its periodic software updates keep the car feeling new, so he never looks enviously at the features on later models. Still, these updates do at least sound like traditional software. The last update of 2019, for example, included improved driver visualization, a "Camp Mode" to make the car more comfortable to spend the night in, and other interface improvements. I assume something as ordinarily useful as map updates is too trivial to mention.

Even though this means a car is now really a fancy interconnected series of dozens of computer networks whose output happens to be making a large, heavy object move on wheels, I don't have trouble grasping the whole thing, not really. It's a control system.

Much more confounding was the time, in late 1993, when I visited Demon Internet, then a startup founded to offer Internet access to UK consumers. Like quite a few others, I was having trouble getting connected via Demon's adapted version of KA9Q, connection software written for packet radio. This was my first puzzlement: how could software for "packet radio" (whatever that was) do anything on a computer? That was nothing to my confusion when Demon staffer Mark Turner explained to me that the computer could parse the stream of information coming into it and direct the results to different applications simultaneously. At that point, I'd only ever used online services where you could only do one thing at a time, just as you could only make one phone call at a time. I remember finding the idea of one data stream servicing many applications at once really difficult to grasp. How did it know what went where?

That is software, and it's what happened in the shift from legacy phone networks' circuit switching to Internet-style packet switching.
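
The answer, I eventually understood, is that every packet carries a label - a port number - that tells the software which application should get it. Here is a toy sketch of the idea in Python (my illustration; it bears no resemblance to KA9Q's actual code):

    # a toy demultiplexer: one incoming stream, several applications
    handlers = {}   # port number -> application callback

    def register(port, app):
        handlers[port] = app

    def deliver(packet):
        port, payload = packet
        app = handlers.get(port)
        if app:
            app(payload)   # hand the data to whichever application claimed this port
        # packets for unknown ports are simply dropped in this toy version

    register(25, lambda data: print("mail client got:", data))
    register(80, lambda data: print("web browser got:", data))

    # one interleaved stream of packets services two applications at once
    for packet in [(80, "<html>..."), (25, "Subject: hi"), (80, "more html")]:
        deliver(packet)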

I had a similar moment of surreality when first told about software-defined radio. A radio was a *thing*. How could it be software? By then I knew about spread spectrum, invented by the actress Hedy Lamarr and pianist George Antheil to protect wartime radio-guided torpedoes from jamming, so it shouldn't have seemed as weird as it did.

And so to this week, when, at the first PhD Cyber Security Winter School, I discovered programmable - that is, software-defined - networks. Of course networks are controlled by software already, but at the physical layer it's cables, switches, and routers. If one of those specialized devices needs to be reconfigured you have to do it locally, device by device. Now, the idea is more generic hardware that can be reprogrammed on the fly, enabling remote - and more centralized and larger-scale - control. Security people like the idea that a network can both spot and harden itself against malicious traffic much faster. I can't help being suspicious that this new world will help attackers, too, first by providing a central target to attack, and second because it will be vastly more complex. Authentication and encryption will be crucial in an environment where a malformed or malicious data packet doesn't just pose a threat to the end user who receives it but can reprogram the network. Helpfully, the NSA has thought about this in more depth and greater detail. They do see centralization as a risk, and recommend a series of measures for protecting the controller; they also highlight the problems increased complexity brings.
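
The core abstraction is a match-action table: a controller, possibly far away, pushes rules down to the switches, and each switch just looks incoming packets up against the rules it has been given. A toy sketch of my own (real deployments use P4 or OpenFlow and specialized hardware, not Python):

    # a toy match-action "switch" plus the rule-installing controller
    flow_table = []   # list of (match_fn, action), newest rule checked first

    def install_rule(match_fn, action):
        # what a centralized controller would push to many switches at once
        flow_table.insert(0, (match_fn, action))

    def process(packet):
        for match, action in flow_table:
            if match(packet):
                return action
        return "send_to_controller"   # unknown traffic gets punted upstream

    install_rule(lambda p: True, "forward")                    # default: forward everything
    install_rule(lambda p: p["src"] == "10.0.0.66", "drop")    # harden against a bad source

    print(process({"src": "10.0.0.66", "dst": "10.0.0.1"}))    # drop
    print(process({"src": "10.0.0.2", "dst": "10.0.0.1"}))     # forward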

As the workshop leader said, this is enough of a trend for Cisco and Intel to embrace it; six months ago, Intel paid $5 billion for Barefoot Networks, the creator of P4, the language I saw demonstrated for programming these things.

At this point I began wondering if this doesn't up-end the entire design philosophy of the Internet, which was to push all the intelligence out to the edges. The beginnings of this new paradigm, active networking, appeared around the early 2000s. The computer science literature - for example, Activating Networks (PDF), by Jonathan M. Smith, Kenneth L. Calvert, Sandra L. Murphy, Hilarie K. Orman, and Larry L. Peterson, and Active Networking: One View of the Past, Present, and Future (PDF), by Smith and Scott M. Nettles - plots out the problems of security and complexity in detail, and considers the Internet and interoperability issues. The Road to SDN: An Intellectual History of Programmable Networks, by Nick Feamster, Jennifer Rexford, and Ellen Zegura, recapitulates the history to date.

My real question, however, is one I suspect has received less consideration: will these software-defined networks make surveillance and censorship easier or harder? Will they have an effect on the accessibility of Internet freedoms? Are there design considerations we should know about? These seem like reasonable questions to ask as this future hurtles toward us.

Illustrations: Hedy Lamarr, in The Conspirators, 1944.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 10, 2020

The forever bug

Y2K is back, and this time it's giggling at us.

For the past few years, there's been a growing drumbeat on social media and elsewhere to the effect that Y2K - "the year 2000 bug" - never happened. It was a nothingburger. It was hyped then, and anyone saying now it was a real thing is like, ok boomer.

Be careful what old averted messes you dismiss; they may come back to fuck with you.

Having lived through it, we can tell you the truth: Y2K *was* hyped. It was also a real thing that was wildly underestimated for years before it was taken as seriously as it needed to be. When it finally registered as a genuine and massive problem, millions of person-hours were spent remediating software, replacing or isolating systems that couldn't be fixed, and making contingency and management plans. Lots of things broke, but, because of all that work, nothing significant on a societal scale. Locally, though, anyone using a computer at the time likely has a personal Y2K example. In my own case, an instance of Quicken continued to function but stopped autofilling dates correctly. For years I entered dates manually before finally switching to GnuCash.

The story, parts of which Chris Stokel-Walker recounts at New Scientist, began in 1971, when Bob Bemer published a warning about the "Millennium Bug", having realized years earlier that the common practice of saving memory space by using two digits instead of four to indicate the year was storing up trouble. He was largely ignored, in part, it appeared, because no one really believed the software they were writing would still be in use decades later.

It was the mid-1990s before the industry began to take the problem seriously, and when it did, the mainstream coverage broke open. In writing a 1997 Daily Telegraph article, I discovered that mechanical devices had problems, too.

We had both nay-sayers, who called Y2K a boondoggle whose sole purpose was to boost the computer industry's bottom line, and doommongers, who predicted everything from planes falling out of the sky to total societal collapse. As Damian Thompson told me for a 1998 Scientific American piece (paywalled), the Millennium Bug gave apocalyptic types a *mechanism* by which the crash would happen. In the Usenet newsgroup comp.software.year-2000, I found a projected timetable: bank systems would fail early, and by April 1999 the cities would start to burn... When I wrote that society would likely survive because most people wanted it to, some newsgroup members called me irresponsible, and emailed the editor demanding he "fire this dizzy broad". Reconvening ten years later, they apologized.

Also at the extreme end of the panic spectrum was Ed Yardeni, then chief economist at Deutsche Bank, who repeatedly predicted that Y2K would cause a worldwide recession; it took him until 2002 to admit his mistake, crediting the industry's hard work.

It was still a real problem, and with some workarounds and a lot of work most of the effects were contained, if not eliminated. Reporters spent New Year's Eve at empty airports, in case there was a crash. Air travel that night, for sure, *was* a nothingburger. In that limited sense, nothing happened.

Some of those fixes, however, were not so much fixes as workarounds. One of these finessed the rollover problem by creating a "window" and telling systems that two-digit years fell between 1920 and 2020, rather than 1900 and 2000. As the characters on How I Met Your Mother might say: "It's a problem for Future Ted and Future Marshall. Let's let those guys handle it."
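
In code, that workaround amounts to nothing more than a pivot. A toy sketch of mine (actual pivot years varied from system to system):

    # "windowing": interpret two-digit years as falling in 1921-2020
    PIVOT = 20

    def expand_year(two_digit):
        if two_digit <= PIVOT:
            return 2000 + two_digit   # 00-20 become 2000-2020
        return 1900 + two_digit       # 21-99 become 1921-1999

    print(expand_year(99))   # 1999
    print(expand_year(5))    # 2005
    print(expand_year(20))   # 2020 - the top of the window; 21 would come out as 1921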

So, it's 2020, we've hit the upper end of the window, the bug is back, and Future Ted and Future Marshall are complaining about Past Ted and Past Marshall, who should have planned better. But even if they had...the underlying issue is temporary thinking that leads people to still - still, after all these decades - believe that today's software will be long gone 20 years from now and therefore they need only worry about the short term of making it work today.

Instead, the reality is, as we wrote in 2014, that software is forever.

That said, the reality is also that Y2K is forever, because if the software couldn't be rewritten to take a four-digit year field in 1999 it probably can't be today, either. Everyone stresses the need to patch and update software, but a lot - for an increasing value of "a lot" as Internet of Things devices come on the market with no real idea of how long they will be in service - of things can't be updated for one reason or another. Maybe the system can't be allowed to go down; maybe it's a bespoke but crucial system whose maintainers are long gone; maybe the software is just too fragile and poorly documented to change; maybe old versions propagated all over the place and are laboring on in places where they've simply been forgotten. All of that is also a reason why it's not entirely fair for Stokel-Walker to call the old work "a lazy fix". In a fair percentage of cases, creating and moving the window may have been the only option.

But fret ye not. We will get through this. And then we can look forward to 2038, when the 32-bit clocks run out in Linux and other Unix-like systems. Future Ted and Future Marshall will handle it.


Illustrations: Millennium Bug manifested at a French school (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 27, 2019

Runaway

For me, the scariest presentation of 2019 was a talk given by Cornell University professor Vitaly Shmatikov about computer models. It's partly a matter of reframing the familiar picture; for years, Bill Smart and Cindy Grimm have explained to attendees at We Robot that we don't necessarily really know what it is that neural nets are learning when they're deep learning.

In Smart's example, changing a few pixels in an image can change the machine learning algorithm's perception of it from "Abraham Lincoln" to "zebrafish". Misunderstanding what's important to an algorithm is the kind of thing research scientist Janelle Shane exploits when she pranks neural networks and asks them to generate new recipes or Christmas carols from a pile of known examples. In her book, You Look Like a Thing and I Love You, she presents the inner workings of many more examples.

All of this explains why researchers Kate Crawford and Trevor Paglen's ImageNet Roulette experiment tagged my Twitter avatar as "the Dalai Lama". I didn't dare rerun it, because how can you beat that? The experiment over, would-be visitors are now redirected to Crawford's and Paglen's thoughtful examination of the problems they found in the tagging and classification system that's being used in training these algorithms.

Crawford and Paglen write persuasively about the world view captured by the inclusion of categories such as "Bad Person" and "Jezebel" - real categories in the Person classification subsystem. This aspect has gone largely unnoticed until now because conference papers focused on the non-human images in ten-year-old ImageNet and its fellow training databases. Then there is the *other* problem: that the people's pictures used to train the algorithm were appropriated from search engines, photo-sharing sites such as Flickr, and video of students walking their university campuses. Even if you would have approved the use of your forgotten Flickr feed to train image recognition algorithms, I'm betting you wouldn't have agreed to be literally tagged "loser" so the algorithm can apply that tag later to a child wearing sunglasses. Why is "gal" even a Person subcategory, still less the most-populated one? Crawford and Paglen conclude that datasets are "a political intervention". I'll take "Dalai Lama", gladly.

Again, though, all of this fits with and builds upon an already known problem: we don't really know which patterns machine learning algorithms identify as significant. In his recent talk to a group of security researchers at UCL, however, Shmatikov, whose previous work includes training an algorithm to recognize faces despite obfuscation, outlined a deeper problem: these algorithms "overlearn". How do we stop them from "learning" (and then applying) unwanted lessons? He says we can't.

"Organically, the model learns to recognize all sorts of things about the original data that were not intended." In his example, in training an algorithm to recognize gender using a dataset of facial images, alongside it will learn to infer race, including races not represented in the training dataset, and even identities. In another example, you can train a text classifier to infer sentiment - and the model also learns to infer authorship.

Options for counteraction are limited. Censoring unwanted features doesn't work because a) you don't know what to censor; b) you can't censor something that isn't represented in the training data; and c) that type of censoring damages the algorithm's accuracy on the original task. "Either you're doing face analysis or you're not." Shmatikov and Congzheng Song explain their work more formally in their paper Overlearning Reveals Sensitive Attributes.

"We can't really constrain what the model is learning," Shmatikov told a group of security researchers at UCL recently, "only how it is used. It is going to be very hard to prevent the model from learning things you don't want it to learn." This drives a huge hole through GDPR, which relies on a model of meaningful consent. How do you consent to something no one knows is going to happen?

What Shmatikov was saying, therefore, is that from a security and privacy point of view, the typical question we ask, "Did the model learn its task well?", is too limited. "Security and privacy people should also be asking: what else did the model learn?" Some possibilities: it could have memorized the training data; discovered orthogonal features; performed privacy-violating tasks; or incorporated a backdoor. None of these are captured in assessing the model's accuracy in performing the assigned task.

My first reaction was to wonder whether a data-mining company like Facebook could use Shmatikov's explanation as an excuse when it's accused of allowing its system to discriminate against people - for example, in digital redlining. Shmatikov thought not - at least, no more than their work helps people find out what their models are really doing.

"How to force the model to discover the simplest possible representation is a separate problem worth invdstigating," he concluded.

So: we can't easily predict what computer models learn when we set them a task involving complex representations, and we can't easily get rid of these unexpected lessons while retaining the usefulness of the models. I was not the only person who found this scary. We are turning these things loose on the world and incorporating them into decision making without the slightest idea of what they're doing. Seriously?


Illustrations: Vitaly Shmatikov (via Cornell).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 30, 2019

The Fregoli delusion

In biology, a monoculture is a bad thing. If there's only one type of banana, a fungus can wipe out the entire species instead of, as now, just the most popular one. If every restaurant depends on Yelp to find its customers, Yelp's decision to replace their phone number with one under its own control is a serious threat. And if, as we wrote here some years ago, everyone buys everything from Amazon, gets all their entertainment from Netflix, and gets all their mapping, email, and web browsing from Google, what difference does it make that you're iconoclastically running Ubuntu underneath?

The same should be true in the culture of software development. It ought to be obvious that a monoculture is as dangerous there as on a farm. Because: new ideas, robustness, and innovation all come from mixing. Plenty of business books even say this. It's why research divisions create public spaces, so people from different disciplines will cross-fertilize. It's why people and large businesses live in cities.

And yet, as the journalist Emily Chang documents in her 2018 book Brotopia: Breaking Up the Boys' Club of Silicon Valley, Silicon Valley technology companies have deliberately spent the last couple of decades progressively narrowing their culture. To a large extent, she blames the spreading influence of the Paypal Mafia. At Paypal's founding, she writes, this group, which includes Palantir founder Peter Thiel, LinkedIn founder Reid Hoffman, and Tesla supremo Elon Musk, adopted the basic principle that to make a startup lean, fast-moving, and efficient you needed a team who thought alike. Paypal's success and the diaspora of its early alumni disseminated a culture in which hiring people like you was a *strategy*. This is what #MeToo and fights for equality are up against.

Businesses are as prone to believing superstitions as any other group of people, and unicorn successes are unpredictable enough to fuel weird beliefs, especially in an already-insular place like Silicon Valley. Yet, Chang finds much earlier roots. In the mid-1960s, System Development Corporation hired psychologists William Cannon and Dallis Perry to create a profile to help it to identify recruits who would enjoy the new profession of computer programming. They interviewed 1,378 mostly male programmers, and found this common factor: "They don't like people." And so the idea that "antisocial" was a qualification was born, spreading outwards through increasingly popular "personality tests" and, because of the cultural differences in the way girls and boys are socialized, gradually and systematically excluding women.

Chang's focus is broad, surveying the landscape of companies and practices. For personal inside experiences, you might try Ellen Pao's Reset: My Fight for Inclusion and Lasting Change, which documents the experiences at Kleiner Perkins, which led her to bring a lawsuit, and at Reddit, where she was pilloried for trying to reduce some of the system's toxicity. Or, for a broader range, try Lean Out, a collection of personal stories edited by Elissa Shevinsky.

Chang finds that even Google, which began with an aggressive policy of hiring female engineers that netted it technology leaders Susan Wojcicki, CEO of YouTube, Marissa Mayer, who went on to try to rescue Yahoo, and Sheryl Sandberg, now COO of Facebook, failed in the long term. Today its male-female ratio is average for Silicon Valley. She cites Slack as a notable exception; founder Stewart Butterfield set out to build a different kind of workplace.

In that sense, Slack may be the opposite of Facebook. In Zucked: Waking Up to the Facebook Catastrophe, Roger McNamee tells the mea culpa story of his early mentorship of Mark Zuckerberg and the company's slow pivot into posing problems he believes are truly dangerous. What's interesting to read in tandem with Chang's book is his story of the way Silicon Valley hiring changed. Until around 2000, hiring rewarded skill and experience; the limitations on memory, storage, and processing power meant companies needed trained and experienced engineers. Facebook, however, came along at the moment when those limitations had vanished and as the dot-com bust finished playing out. Suddenly, products could be built and scaled up much faster; open source libraries and the arrival of cloud suppliers meant they could be developed by less experienced, less skilled, *younger*, much *cheaper* people; and products could be free, paid for by advertising. Couple this with 20 years of Reagan deregulation and the influence, which he also cites, of the Paypal Mafia, and you have the recipe for today's discontents. McNamee writes that he is unsure what the solution is; his best effort at the moment appears to be advising the Center for Humane Technology, led by former Google design ethicist Tristan Harris.

These books go a long way toward explaining the world Caroline Criado-Perez describes in 2019's Invisible Women: Data Bias in a World Designed for Men. Her discussion is not limited to Silicon Valley - crash test dummies, medical drugs and practices, and workplace design all appear - but her main point applies. If you think of one type of human as "default normal", you wind up with a world that's dangerous for everyone else.

You end up, as she doesn't say, with a monoculture as destructive to the world of ideas as those fungi are to Cavendish bananas. What Zucked and Brotopia explain is how we got there.


Illustrations: Still from Anomalisa (2015).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 2, 2019

Unfortunately recurring phenomena

It's summer, and the current comprehensively bad news is all stuff we can do nothing about. So we're sweating the smaller stuff.

It's hard to know how seriously to take it, but US Senator Josh Hawley (R-MO) has introduced the Social Media Addiction Reduction Technology (SMART) Act, intended as a disruptor to the addictive aspects of social media design. *Deceptive* design - which figured in last week's widely criticized $5 billion FTC settlement with Facebook - is definitely wrong, and the dark patterns site has long provided a helpful guide to those practices. But the bill is too feature-specific (ban infinite scroll and autoplay) and fails to recognize that one size of addiction disruption cannot possibly fit all. Spending more than 30 minutes at a stretch reading Twitter may be a dangerous pastime for some but a business necessity for journalists, PR people - and Congressional aides.

A better approach might be to require sites to replay the first video someone chooses at regular intervals until they get sick of it and turn off the feed. This is about how I feel about the latest regular reiteration of the demand for back doors in encrypted messaging. The fact that every new home secretary - in this case, Priti Patel - calls for this suggests there's an ancient infestation in their office walls that needs to be found and doused with mathematics. Don't Patel and the rest of the Five Eyes realize the security services already have bulk device hacking?

Ever since Microsoft announced it was acquiring the software repository Github, it should have been obvious the community would soon be forced to change. And here it is: Microsoft is blocking developers in countries subject to US trade sanctions. The formerly seamless site supporting global collaboration and open source software is being fractured at the expense of individual PhD students, open source developers, and others who trusted it, and everyone who relies on the software they produce.

It's probably wrong to solely blame Microsoft; save some for the present US administration. Still, throughout Internet history the communities bought by corporate owners wind up destroyed: CompuServe, Geocities, Television without Pity, and endless others. More recently, Verizon, which bought Yahoo and AOL for its Oath subsidiary (now Verizon Media), de-porned Tumblr. People! Whenever the online community you call home gets sold to a large company it is time *right then* to begin building your own replacement. Large companies do not care about the community you built, and this is never gonna change.

Also never gonna change: software is forever, as I wrote in 2014, when Microsoft turned off life support for Windows XP. The future is living with old software installations that can't, or won't, be replaced. The truth of this resurfaced recently, when a survey by Spiceworks (PDF) found that a third of all businesses' networks include at least one computer running XP and 79% of all businesses are still running Windows 7, which dies in January. In the 1990s the installed base updated regularly because hardware was upgraded so rapidly. Now, a computer's lifespan exceeds the length of a software generation, and the accretion of applications and customization makes updating hazardous. If Microsoft refuses to support its old software, at least open it to third parties. Now, there would be a law we could use.

The last few years have seen repeated news about the many ways that machine learning and AI discriminate against those with non-white skin, typically because of the biased datasets they rely on. The latest such story is startling: Wearables are less reliable in detecting the heart rate of people with darker skin. This is a "huh?" until you read that the devices use colored light and optical sensors to measure the volume of your blood in the vessels at your wrist. Hospital-grade monitors use infrared. Cheaper devices use green light, which melanin tends to absorb. I know it's not easy for people to keep up with everything, but the research on this dates to 1985. Can we stop doing the default white thing now?

Meanwhile, at the Barbican exhibit AI: More than Human...In a video, a small, medium-brown poodle turns his head toward the camera with a - you should excuse the anthropomorphism - distinct expression of "What the hell is this?" Then he turns back to the immediate provocation and tries again. This time, the Sony Aibo he's trying to interact with wags its tail, and the dog jumps back. The dog clearly knows the Aibo is not a real dog: it has no dog smell, and although it attempts a play bow and moves its head in vaguely canine fashion, it makes no attempt to smell his butt. The researcher begins gently stroking the Aibo's back. The dog jumps in the way. Even without a thought bubble you can see the injustice forming, "Hey! Real dog here! Pet *me*!"

In these two short minutes the dog perfectly models the human reaction to AI development: 1) what is that?; 2) will it play with me?; 3) this thing doesn't behave right; 4) it's taking my job!

Later, I see the Aibo slumped, apparently catatonic. Soon, a staffer strides through the crowd clutching a woke replacement.

If the dog could talk, it would be saying "#Fail".


Illustrations: Sunrise from the 30th floor.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 11, 2018

Lost in transition

End_all_DRM_in_the_world_forever,_within_a_decade.jpg"Why do I have to scan my boarding card?" I demanded loudly of the machine that was making this demand. "I'm buying a thing of milk!"

The location was Heathrow Terminal 5. The "thing of milk" was a pint of milk being purchased with a view to a late arrival in a continental European city where tea is frequently offered with "Kaffeesahne", a thick, off-white substance that belongs with tea about as much as library paste does.

A human materialized out of nowhere, and typed in some codes. The transaction went through. I did not know you could do that.

The incident sounds minor - yes, I thanked her - but has a real point. For years, UK airport retailers secured discounts for themselves by demanding to scan boarding cards at the point of purchase, claiming the reason was to exempt customers from VAT on purchases they were taking out of the country. Just a couple of years ago the news came out: the companies were failing to pass the resulting discounts on to customers and were simply pocketing the VAT. Legally, you are not required to comply with the request.

They still ask, of course.

If you're dealing with a human retail clerk, refusing is easy: you say "No" and they move on to completing the transaction. The automated checkout (which I normally avoid), however, is not familiar with No. It is not designed for No. No is not part of its vocabulary unless a human comes along with an override code.

My legal right not to scan my boarding card therefore relies on the presence of an expert human. Take the human out of that loop - or overwhelm them with too many stations to monitor - and the right disappears, engineered out by automation and enforced by the time pressure of having to catch a flight and/or the limited resource of your patience.

This is the same issue that has long been machinified by DRM - digital rights management - and the locks it applies to commercially distributed content. The text of Alice in Wonderland is in the public domain, but wrap it in DRM and your legal rights to copy, lend, redistribute, and modify all vanish, automated out with no human to summon and negotiate with.

Another example: the discount railcard I pay for once a year is renewable online. But if you go that route, you are required to upload your passport, photo driver's license, or national ID card. None of these should really be necessary. If you renew at a railway station, you pay your money and get your card, no identification requested. In this example the automation requires you to submit more data and take greater risk than the offline equivalent. And, of course, when you use a website there's no human to waive the requirement and restore the status quo.

Each of these services is designed individually. There is no collusion, and yet the direction is uniform.

Most of the discussion around this kind of thing - rightly - focuses on clearly unjust systems with major impact on people's lives. The COMPAS recidivism algorithm, for example, is used to risk-assess the likelihood that a criminal defendant will reoffend. A ProPublica study found that the algorithm tended to produce biased results of two kinds: first, black defendants were more likely than white defendants to be incorrectly rated as high risk; second, white reoffenders were incorrectly classified as low-risk more often than black ones. Other such systems show similar biases, all for the same basic reason: decades of prejudice are baked into the training data these systems are fed. Virginia Eubanks, for example, has found similar issues in systems such as those that attempt to identify children at risk and that appear to see poverty itself as a risk factor.
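To make the two kinds of error concrete: ProPublica's comparison boils down to computing, for each group, the false positive rate (non-reoffenders rated high risk) and the false negative rate (reoffenders rated low risk). A minimal sketch in Python, using hypothetical field names rather than the real COMPAS columns, looks like this:

    # Sketch only: field names are hypothetical, not the actual COMPAS dataset schema.
    from collections import defaultdict

    def error_rates_by_group(records):
        """records: dicts with 'group', 'predicted_high_risk' (bool), 'reoffended' (bool)."""
        counts = defaultdict(lambda: {"fp": 0, "negatives": 0, "fn": 0, "positives": 0})
        for r in records:
            c = counts[r["group"]]
            if r["reoffended"]:
                c["positives"] += 1
                if not r["predicted_high_risk"]:
                    c["fn"] += 1          # reoffender wrongly rated low risk
            else:
                c["negatives"] += 1
                if r["predicted_high_risk"]:
                    c["fp"] += 1          # non-reoffender wrongly rated high risk
        return {
            group: {
                "false_positive_rate": c["fp"] / c["negatives"] if c["negatives"] else None,
                "false_negative_rate": c["fn"] / c["positives"] if c["positives"] else None,
            }
            for group, c in counts.items()
        }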

By contrast, the instances I'm pointing out seem smaller, maybe even insignificant. But the potential is that over time wide swathes of choices and rights will disappear, essentially automated out of our landscape. Any process can be gamed this way.

At a Royal Society meeting last year, law professor Mireille Hildebrandt outlined the risks of allowing the atrophy of governance through the text-driven law that today is negotiated in the courts. The danger, she warned, is that through machine deployment and "judgemental atrophy" it will be replaced with administration, overseen by inflexible machines that enforce rules with no room for contestability, which Hildebrandt called "the heart of the rule of law".

What's happening here is, as she said, administration - but it's administration in which our legitimate rights dissipate in a wave of "because we can" automated demands. There are many ways we willingly give up these rights already - plenty of people are prepared to give up anonymity in financial transactions by using all manner of non-cash payment systems, for example. But at least those are conscious choices from which we derive a known benefit. It's hard to see any benefit accruing from the loss of the right to object to unreasonable bureaucracy imposed upon us by machines designed to serve only their owners' interests.


Illustrations: "Kill all the DRM in the world within a decade" (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 11, 2018

The third penguin

two-angry-penguins.jpgYou never have time to disrupt yourself and your work by updating your computer's software until Bad Things happen and you're forced to find the time you don't have.

So last week the Ubuntu machine's system drive, which I had somehow failed to notice dated to 2012, lost the will to live. I had been putting off upgrading to 64-bit; several useful pieces of software are no longer available in 32-bit versions, such as Signal for Desktop, Free File Sync, and Skype.

It transpired that 18.04 LTS had been released a few days earlier. Latest version means longer until forced to upgrade, right?

The good news is that Ubuntu's ease of installation continues to improve. The experience of my first installation, about two and a half years ago, of trying umpteen things and hoping one would eventually work, is gone. Both audio and video worked first time out, and although I still had to switch video drivers, I didn't have to search AskUbuntu to do it. Even more than with my second installation, Canonical has come very, very close to one-click installation. The video freezes that have been plaguing the machine since the botched 16.04 update in 2016 appear to have largely gone.

However, making it easy also makes some things hard. Reason: making it easy means eliminating things that require effort to configure and that might complicate the effortlessness. In the case of 18.04, that means that if you have a mixed network you still have to separately download and configure Samba, the thing that makes it possible for an Ubuntu machine to talk to a Windows machine. I understand this choice, I think: it's reasonable to surmise that the people who need an easy installation are unlikely to have mixed networks, and the people who do have them can cope with downloading extra software. But Samba is just mean.

An ideal installation routine would do something like the following (a rough sketch in code follows the list):
- Ask the names and IP addresses of the machines you want to connect to;
- Ask what directories you want to share;
- Use that information to write the config file;
- Send you to pages with debugging information if it doesn't work.
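Purely as an illustration of that imagined routine - this is not anything Canonical ships, and the share settings are just the standard smb.conf options - the question-asking part might look something like this in Python:

    # Rough sketch: prompt for shares and allowed machines, then write a minimal
    # Samba configuration fragment. Review the output and merge it into
    # /etc/samba/smb.conf by hand; this is an illustration, not an installer.
    def prompt_shares():
        shares = []
        while True:
            name = input("Share name (blank to finish): ").strip()
            if not name:
                break
            path = input(f"Directory to share as '{name}': ").strip()
            shares.append((name, path))
        return shares

    def write_conf(shares, hosts_allow, out_path="smb.conf.generated"):
        lines = ["[global]",
                 "   workgroup = WORKGROUP",
                 f"   hosts allow = {' '.join(hosts_allow)}",
                 ""]
        for name, path in shares:
            lines += [f"[{name}]",
                      f"   path = {path}",
                      "   read only = no",
                      "   browseable = yes",
                      ""]
        with open(out_path, "w") as f:
            f.write("\n".join(lines) + "\n")
        print(f"Wrote {out_path}")

    if __name__ == "__main__":
        allowed = input("IP addresses allowed to connect (space-separated): ").split()
        write_conf(prompt_shares(), allowed)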

Of course, it doesn't work like that. I eventually found the page I think helped me most last time. That half-solved the problem, in that the Windows machines could see the Ubuntu machine but not the reverse. As far as I could tell, the Ubuntu machine had adopted the strategy of the Ravenous Bugblatter Beast of Traal and wrapped a towel around its head on the basis that if it couldn't see them they couldn't see *it*.

Many DuckDuckGo searches later the answer arrived: apparently for 18.04 the decision was made to remove a client protocol. The solution was to download and install a bit of software called smbclient, which would restore the protocol. That worked.

Far more baffling was the mysterious, apparently random appearance of giant colored graphics in my Thunderbird inbox. All large enough to block numerous subject lines. This is not an easy search to frame, and I've now forgotten the magical combination of words that produced the answer: Ubuntu 18.04 has decorated itself with a colorful set of bright, shiny *emoji*. These, it turns out, you can remove easily. Once you have, the symbols sent to torture you shrink back down to tiny black and white blobs that disturb no one. Should you feel a desperate need to find out what one is, you can copy and paste it into Emojipedia, and there it is: that thing you thought was a balloon was in fact a crystal ball. Like it matters.

I knew going in that Unity, the desktop interface that came with my previous versions of Ubuntu, had been replaced by Gnome, which everyone predicted I would hate.

The reality is that it's never about whether a piece of software is good or bad; it's always about what you're used to. If your computer is your tool rather than your plaything, the thing you care most about is not having to learn too much that's new. I don't mind that the Ubuntu machine doesn't look like Windows; I prefer to have the reminder that it's different. But as much as I'd disliked it at first, I'd gotten used to the way Unity groups and displays windows, the size of the font it used, and the controls for configuring it. So, yes, Gnome annoyed, with its insistence on offering me apps I don't want, tiny grey fonts, wrong-side window controls, and pointless lockscreens that all wanted reconfiguration. KDE desktop, which a friend insisted I should try, didn't seem much different. It took only two days to revert to Unity, which is now "community-maintained", polite GNU/Linux-speak for "may not survive for long". Back to some version of normal.

In my view, Ubuntu could still fix some things. It should be easier to add applications to the Startup list. The Samba installation should be automated and offered as an option in system installation with a question like, "Do you need to connect to a Windows machine on your network?" User answers yes or no, Samba is installed or not with a script like that suggested above.

But all told, it remains remarkable progress. I salute the penguin wranglers.


Illustrations: Penguins.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 24, 2013

Forcing functions

At last Saturday's OpenTech, perennial grain-of-sand-in-the-Internet-oyster Bill Thompson, in a session on open data, asked an interesting question. In a nod to NTK's old slogan, "They stole our revolution - now we're stealing it back", he asked: how can we ensure that open data supports values of democracy, openness, transparency, and social justice? The Internet pioneers did their best to embed these things into their designs, and the open architecture, software, and licensing they pioneered can be taken without paying by any oppressive government or large company that cares to. Is this what we want for open data, too?

Thompson writes (and, if I remember correctly, actually said, more or less):

...destruction seems like a real danger, not least because the principles on which the Internet is founded leave us open to exploitation and appropriation by those who see openness as an opportunity to take without paying - the venture capitalists, startups and big tech companies who have built their empires in the commons and argue that their right to build fences and walls is just another aspect of 'openness'.

Constraining the ability to take what's been freely developed and exploit it has certainly been attempted, most famously by Richard Stallman's efforts to use copyright law to create software licenses that would bar companies from taking free software and locking it up into proprietary software. It's part of what Creative Commons is about, too: giving people the ability to easily specify how their work may be used. Barring commercial exploitation without payment is a popular option: most people want a cut when they see others making a profit from their work.

The problem, unfortunately, is that it isn't really possible to create an open system that can *only* be used by the "good guys" in "good" ways. The "free speech, not free beer" analogy Stallman used to explain "free software" applies. You can make licensing terms that bar Microsoft from taking GNU/Linux, adding a new user interface, and claiming copyright in the whole thing. But you can't make licensing terms that bar people using Linux from using it to build wiretapping boxes for governments to install in ISPs to collect everyone's email. If you did, either the terms wouldn't hold up in a court of law or it would no longer be free software but instead proprietary software controlled by a well-meaning elite.

One of the fascinating things about the early days of the Internet is the way everyone viewed it as an unbroken field of snow they could mold into the image they wanted. What makes the Internet special is that any of those models really can apply: it's as reasonable to be the entertainment industry and see it as a platform that just needs some locks and laws to improve its effectiveness as a distribution channel as to be Bill Thompson and view it as a platform for social justice that's in danger of being subverted.

One could view the legal history of The Pirate Bay as a worked example, at least as it's shown in the documentary TPB-AFK: The Pirate Bay - Away From Keyboard, released in February and freely downloadable under a Creative Commons license from a torrent site near you (like The Pirate Bay). The documentary got the best possible publicity this week when the movie studios issued DMCA takedown notices to a batch of sites.

I'm not sure what leg their DMCA claims could stand on, so the most likely explanation is the one TorrentFreak came up with: that the notices are collateral damage. The only remotely likely thing in the documentary to have set them off - other than simple false positives - is the four movie studio logos that appear in it.

There are many lessons to take away from the movie, most notably how much more nuanced the TPB founders' views are than they came across at the time. My favorite moment is probably when Fredrik Tiamo discusses the opposing counsels' inability to understand how TPB actually worked: "We tried to get organized, but we failed every single time." Instead, no boss, no contracts, no company. "We're just a couple of guys in a chat room." My other favorite is probably the moment when Monique Wadsted, Hollywood's lawyer on the case, explains that the notion that young people are disaffected with copyright law is a myth.

"We prefer AFK to IRL," says one of the founders, "because we think the Internet is real."

Given its impact on their business, I'm sure the entertainment industry thinks the Internet is real, too. They're just one of many groups who would like to close down the Internet so it can't be exploited by the "bad guys": security people, governments, child protection campaigners, and so on. Open data will be no different. So, sadly, my answer to Bill Thompson is no, there probably isn't a way to do what he has in mind. Closed in the name of social justice is still closed. Open systems can be exploited by both good and bad guys (for your value of "good" and "bad"); the group exploiting a closed system is always *someone's* bad guy.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted irregularly during the week at the net.wars Pinboard - or follow on Twitter.


January 6, 2012

Only the paranoid

Yesterday's news that the Ramnit worm has harvested the login credentials of 45,000 British and French Facebook users seems to me a watershed moment for Facebook. If I were an investor, I'd wish I had already cashed out. Indications are, however, that founding CEO Mark Zuckerberg is in it for the long haul, in which case he's going to have to find a solution to a particularly intractable problem: how to protect a very large mass of users from identity fraud when his entire business is based on getting them to disclose as much information about themselves as possible.

I have long complained about Facebook's repeatedly changing privacy controls. This week, while working on a piece on identity fraud for Infosecurity, I've concluded that the fundamental problem with Facebook's privacy controls is not that they're complicated, confusing, and time-consuming to configure. The problem with Facebook's privacy controls is that they exist.

In May 2010, Zuckerberg enraged a lot of people, including me, by opining that privacy is no longer a social norm. As Judith Rauhofer has observed, the world's social norms don't change just because some rich geeks in California say so. But the 800 million people on Facebook would arguably be much safer if the service didn't promise privacy - like Twitter. Because then people wouldn't post all those intimate details about themselves: their kids' pictures, their drunken sex exploits, their incitements to protest, their porn star names, their birth dates... Or if they did, they'd know they were public.

Facebook's core privacy problem is a new twist on the problem Microsoft has: legacy users. Apple was willing to make earlier generations of its software non-functional in the shift to OS X. Microsoft's attention to supporting legacy users allows me to continue to run, on Windows 7, software that was last updated in 1997. Similarly, Facebook is trying to accommodate a wide variety of privacy expectations, from those of people who joined back when membership was limited to a few relatively constrained categories to those of people joining today, when the system is open to all.

Facebook can't reinvent itself wholesale: it is wholly and completely wrong to betray users who post information about themselves into what they are told is a semi-private space by making that space irredeemably public. The storm every time Facebook makes a privacy-related change makes that clear. What the company has done exceptionally well is to foster the illusion of a private space despite the fact that, as the Australian privacy advocate Roger Clarke observed in 2003, collecting and abusing user data is social networks' only business model.

Ramnit takes this game to a whole new level. Malware these days isn't aimed at doing cute little things like making hard drive failure noises or sending all the letters on your screen tumbling into a heap at the bottom. No, it's aimed at draining your bank account and hijacking your identity for other types of financial exploitation.

To do this, it needs to find a way inside the circle of trust. On a computer network, that means looking for an unpatched hole in software to leverage. On the individual level, it means the malware equivalent of viral marketing: get one innocent bystander to mistakenly tell all their friends. We've watched this particular type of action move through a string of vectors as humans move to get away from spam: from email to instant messaging to, now, social networks. The bigger Facebook gets, the bigger a target it becomes. The more information people post on Facebook - and the more their friends and friends of friends friend promiscuously - the greater the risk to each individual.

The whole situation is exacerbated by endemic, widespread, poor security practices. Asking people to provide the same few bits of information for back-up questions in case they need a password reset. Imposing password rules that practically guarantee people will use and reuse the same few choices on all their sites. Putting all the eggs in services that are free at point of use and that you pay for in unobtainable customer service (not to mention behavioral targeting and marketing) when something goes wrong. If everything is locked to one email account on a server you do not control, if your security questions could be answered by a quick glance at your Facebook Timeline and a Google search, if you bank online and use the same passwords throughout...you have a potential catastrophe in waiting.

I realize not everyone can run their own mail server. But you can use multiple, distinct email addresses and passwords, you can create unique answers on the reset forms, and you can limit your exposure by presuming that everything you post *is* public, whether the service admits it or not. Your goal should be to ensure that when - it's no longer safe to say "if" - some part of your online life is hacked the damage can be contained to that one, hopefully small, piece. Relying on the privacy consciousness of friends means you can't eliminate the risk; but you can limit the consequences.

Facebook is facing an entirely different risk: that people, alarmed at the thought of being mugged, will flee elsewhere. It's happened before.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 9, 2010

Letter box

In case you thought the iPad was essentially a useless, if appealing, gadget, take heart: it now arguably has a reason to exist in the form of an app, iMean, designed to help autistic children communicate.

The back story: my friend Michael's son, Dan, is 14; his autism means he can't really speak and has motor control difficulties.

"He's somebody who at the age of 12 had a spoken vocabulary of 100 words," says Michael, "though he seemed to have a much greater recognition vocabulary and could understand most of what we said to him, though it was hard to be sure."

That year, 2008, the family went to Texas to consult Soma Mukhopadhyay, who over the space of four days was able to get Dan communicating through multiple-choice. At first, the choices were written on two pieces of paper and Dan would grab one. He rapidly moved on to using a pencil to point at large letters placed in alphabetical order on a piece of laminated cardboard, a process Michael compares to a series of multiple-choice questions with 26 possible answers.

"Before Soma there were no letters, only words. So what he came to realize was that all the words he knew and could recognize were all combinations of the same 26 letters," Michael says. "The letter board did for Dan what moveable type did for the Western world, but the difference is that before Gutenberg people could still write and Dan could not."

The need for a facilitator to keep Dan focused on the task of spelling out a sentence also raises the issue of ensuring that it's actually Dan who's communicating. Michael says, "I was always very concerned not to impose myself on Dan while helping him as much as possible."

The iPad, therefore, offered the possibility of a more effective letter board that could incorporate predictive text and remember what's been said, and one whose other features might help Dan move on to more efficient - and more independent - communication. Dan's eyes jump, so he may miss details in written text, but VoiceOver can read him email, and what he types into iMean can be copied into an answer. Performing all those steps independently is some way off, but the potential is life-changing.

Michael proposed the app he had in mind to 18-year-old programmer Richard Meade-Miller. "I didn't think it was going to be that hard because Apple has done most of it for you," says Michael, "but it turns out that to write an app you really need to be able to do programming in Objective-C. For someone who learned Fortran 35 years ago, that's really difficult."

However, there were constraints. "We wanted the buttons to be as big as possible so Dan would have as little chance of error as possible." That forced some hard choices, such as limiting available punctuation marks to four, and making the backspace button a little smaller than Michael had originally hoped in order to make room for Yes and No keys.

"When somebody like Dan sits down with this he may not be able to spell right away, but he needs to be able to say yes or no or say if something goes wrong on the screen. There should be a No button, bright red and very clear." Getting all that into the available screen space also meant creating a different view for numeric input, needed so Dan can do math problems and to speed entering large numbers.

The iPad's memory is also a constraint. "The program runs very quickly and smoothly, but anybody writing an app for this platform has to be careful to release all the things that use memory on a regular basis." For the word prediction feature, iMean uses ZenTap, whose author supplied the code for Meade-Miller to integrate.

Word prediction - as Dan spells out words iMean offers him a changing display of three completed words to choose from - has speeded up the whole process for Dan. But it also, Michael says, has had a noticeable effect on his ability to read, "Because he's reading all day long." A final set of constraints is imposed by Dan's own abilities. Many autistic children do not point, an early developmental milestone. "Dan has started to point a little bit now as a result of tapping things on the letter board." Michael knew that, but he didn't realize how hard it would be for Dan, whose fingers sometimes shake and slip, to distinguish between tapping a key and swiping his fingers across a key - and a few keys are programmed to behave differently if they are swiped rather than tapped. "That may have been a mistake," he says. "It has forced Dan to really concentrate on tapping, so sometimes he double and triple taps."
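For what it's worth, the core idea of word prediction is simple enough to sketch. This is an illustration of the general principle only, not ZenTap's or iMean's actual code: rank the words Dan has used by frequency and offer the three likeliest completions of whatever prefix he has typed.

    # Toy prefix-based word prediction: offer the top three most frequently
    # used words that start with what has been typed so far. Illustrative
    # only; not how ZenTap or iMean is actually implemented.
    def predict(prefix, word_frequencies, choices=3):
        """word_frequencies: dict mapping word -> how often it has been used."""
        prefix = prefix.lower()
        matches = [w for w in word_frequencies if w.startswith(prefix)]
        matches.sort(key=lambda w: word_frequencies[w], reverse=True)
        return matches[:choices]

    vocabulary = {"want": 50, "water": 30, "watch": 20, "walk": 25, "was": 80}
    print(predict("wa", vocabulary))   # ['was', 'want', 'water']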

Dan insisted on making a baseline video the first day so that later they can compare and see how much he's improved.

Their long-term goal is for Dan to be able to communicate with people independently. Whether they get all the way there or not, Michael says, "We know the app works the way we want. He can read a paragraph now instead of just a line - and it's only been three days."

Dan, by voice, is calling it his "stepping stone".

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. This blog eats comments for unknown reasons. Email netwars@skeptic.demon.co.uk.

September 5, 2008

Return of the browser wars

It was quiet, too quiet. For so long it's just been Firefox/Mozilla/Netscape, Internet Explorer, and sometimes Opera that it seemed like that was how it was always going to be. In fact, things were so quiet that it seemed vaguely surprising that Firefox had released a major update and even long-stagnant Internet Explorer had version 8 out in beta. So along comes Chrome to shake things up.

The last time there were as many as four browsers to choose among, road-testing a Web browser didn't require much technical knowledge. You loaded the thing up, pointed it at some pages, and if you liked the interface and nothing seemed hideously broken, that was it.

This time round, things are rather different. To really review Chrome you need to know your AJAX from your JavaScript. You need to be able to test for security holes, and then discover more security vulnerabilities. And the consequences when these things are wrong are so much greater now.

For various reasons, Chrome probably isn't for me, quite aside from its copy-and-paste EULA oops. Yes, it's blazingly fast and I appreciate that because it separates each tab or window into its own process it crashes more gracefully than its competitors. But the switching cost lies less in those characteristics than in the amount of mental retraining it takes to adapt your way of working to new quirks. And, admittedly based on very short acquaintance, Chrome isn't worth it now that I've reformatted Firefox 3's address bar into a semblance of the one in Firefox 2. Perhaps when Chrome is a little older and has replaced a few more of Firefox's most useful add-ons (or when I eventually discover that Chrome's design means it doesn't need them).

Chrome does not do for browsers what Google did for search engines. In 1998, Google's ultra-clean, quick-loading front page and search results quickly saw off competing, ultra-cluttered, wait-for-it portals like Altavista because it was such a vast improvement. (Ironically, Google now has all those features and more, but it's smart enough to keep them off the front page.)

Chrome does some cool things, of course, as anything coming out of Google always has. But its biggest innovation seems to be more completely merging local and global search, a direction in which Firefox 3 is also moving, although with fewer unfortunate consequences. And, as against that, despite the "incognito" mode (similar to IE8) there is the issue of what data goes back to Google for its coffers.

It would be nice to think that Chrome might herald a new round of browser innovation and that we might start seeing browsers that answer different needs than are currently catered for. For example, as a researcher I'd like a browser to pay better attention to archiving issues: a button to push to store pages with meaningful metadata as well as date and time, the URL the material was retrieved from, whether it's been updated since and if so how, and so on. There are a few offline browsers that sort of do this kind of thing, but patchily.
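As a sketch of the kind of thing I mean - standard-library Python, and nothing any current browser actually offers - an archive button might store the page next to a small metadata record:

    # Illustration only: fetch a page and save it alongside a JSON record of
    # where and when it came from. A real browser feature would capture the
    # rendered page rather than re-fetching it.
    import datetime
    import hashlib
    import json
    import os
    import urllib.request

    def archive(url, directory="archive"):
        os.makedirs(directory, exist_ok=True)
        with urllib.request.urlopen(url) as response:
            body = response.read()
            content_type = response.headers.get("Content-Type", "")
        digest = hashlib.sha256(body).hexdigest()
        stem = os.path.join(directory, digest[:16])
        with open(stem + ".html", "wb") as f:
            f.write(body)
        metadata = {
            "url": url,
            "retrieved_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "content_type": content_type,
            "sha256": digest,   # compare against a later fetch to see whether the page changed
        }
        with open(stem + ".json", "w") as f:
            json.dump(metadata, f, indent=2)
        return stem

    archive("https://example.com/")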

The other big question hovering over Chrome is standards: Chrome is possible because the World Wide Web Consortium has done its work well. Standards and the existence of several competing browsers with significant market share have prevented any one company from seizing control and turning the Web into the kind of proprietary system Tim Berners-Lee resisted from the beginning. Chrome will be judged on how well it renders third-party Web pages, but Google can certainly tailor its many free services to work best with Chrome - not so different a proposition from the way Microsoft has controlled the desktop.

Because: the big thing Chrome does is bring Google out of the shadows as a competitor to Microsoft. In 1995, Business Week ran a cover story predicting that Java (write once, run on anything) and the Web (a unified interface) could "rewrite the rules of the software industry". Most of the predictions in that article have not really come true - yet - in the 13 years since it was published; or if they have it's only in modest ways. Windows is still the dominant operating system, and Larry Ellison's thin clients never made a dent in the market. The other big half of the challenge to Microsoft, GNU/Linux and the open-source movement, was still too small and unfinished.

Google is now in a position to deliver on those ideas. Not only are the enabling technologies in place but it's now a big enough company with reliable enough servers to make software as a Net service dependable. You can collaboratively process your words using Google Docs, coordinate your schedules with Google Calendar, and phone across the Net with Google Talk. I don't for one minute think this is the death of Microsoft or that desktop computing is going to vanish from the Earth. For one thing, despite the best-laid cables and best-deployed radios of telcos and men, we are still a long way off of continuous online connectivity. But the battle between the two different paradigms of computing - desktop and cloud - is now very clearly ready for prime time.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 1, 2008

Microhoo!

Large numbers are always fun, and $44.6 billion is a particularly large number. That's how much Microsoft has offered to pay, half cash, half stock, for Yahoo!

Before we get too impressed, we should remember two things: first, half of it is stock, which isn't an immediate drain on Microsoft's resources. Second, of course, is that money doesn't mean the same thing to Microsoft as it does to everyone else. As of last night, Microsoft had $19.09 billion in a nice cash heap, with more coming in all the time. (We digress to fantasise that somewhere inside Microsoft there's a heavily guarded room where the cash is kept, and where Microsoft employees who've done something particularly clever are allowed to roll naked as a reward.)

Even so, the bid is, shall we say, generous. As of last night, Yahoo!'s market cap was $25.63 billion. Yahoo!'s stock has dropped more than 32 percent in the last year, way outpacing the drop of the broader market. When issued, Microsoft's bid of $31 a share represented a 62 percent premium. That generosity tells us two things. First, since the bid was, in the polite market term, "unsolicited", that Microsoft thought it needed to pay that much to get Yahoo!'s board and biggest shareholders to agree. Second, that Microsoft is serious: it really wants Yahoo! and it doesn't want to have to fight off other contenders.

In some cases – most notably Google's acquisition of YouTube – you get the sense that the acquisition is as much about keeping the acquired company out of the hands of competitors as it is about actually wanting to own that company. If Google wanted a slice of whatever advertising market eventually develops around online video clips, it had to have YouTube. Google Video was too little, too late, and if anyone else had bought YouTube Google would never have been able to catch up.

There's an element of that here, in that MSN seems to have no immediate prospect of catching up with Google in the online advertising market. Last May, when a Microsoft-Yahoo! merger was first mooted, CNN noted that even combined MSN and Yahoo! would trail Google in the search market by a noticeable margin. Google has more than 55 percent of the search market; Yahoo! trails distantly with 17 percent and MSN is even further behind with 13 percent. Better, you can hear Microsoft thinking, to trail with 30 percent of the market than 13 percent; unlike most proposals to merge the number two and number three players in a market, this merger would create a real competitor to the number one player.

In addition, despite the fact that Yahoo!'s profits dropped by 4.6 percent in the last quarter (year on year), its revenues grew in the same period by 11.8 percent. If Microsoft thought about it like a retail investor (or Warren Buffett), it would note two things: the drop in Yahoo!'s share price makes it a much more attractive buy than it was last May; and Yahoo!'s steady stream of revenues makes a nice return on Microsoft's investment all by itself. One analyst on CNBC estimated that return at 5 percent annually – not bad given today's interest rates.

Back in 2000, at the height of the bubble, when AOL merged with Time-Warner (a marriage both have lived to regret), I did a bit of fantasy matchmaking that regrettably has vanished off the Telegraph's site, pairing dot-coms and old-world companies for mergers. In that round, Amazon.com got Wal-Mart (or, more realistically, K-Mart), E*Trade passed up Dow-Jones, publisher of the Wall Street Journal (and may I just say how preferable that would have been to Rupert Murdoch's having bought it) in favor of greater irony with the lottery operator G-Tech, Microsoft got Disney (to split up the ducks), and Yahoo! was sent off to buy Rupert Murdoch's News International.

Google wasn't in the list; at the time, it was still a privately held geeks' favorite, out of the mainstream. (And, of course, some companies that were in the list – notably eToys and QXL – don't exist any more.) The piece shows off rather clearly, however, the idea of the time, which was that online companies could use their ridiculously inflated stock valuations to score themselves real businesses and real revenues. That was before Google showed the way to crack online advertising and turn visitor numbers into revenues.

It's often said that the hardest thing for a new technology company is to develop a second product. Microsoft is one of the few who succeeded in that. But the history of personal computing is still extremely short, and history may come to look at DOS, Windows, and Office as all one product: commercial software. Microsoft has seen off its commercial competitors, but open-source is a genuine threat to drive the price of commodity software to zero, much like the revenues from long distance telephone calls. Looked at that way, there is no doubt that Microsoft's long-term survival as a major player depends on finding a new approach. It has kept pitching for the right online approach: information service, portal, player/DRM, now search/advertising. And now we get to find out whether Google, like very few companies before it, really can compete with Microsoft. Game on.


Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 16, 2007

Quick fix

The other day, I noticed that the personal finance software I've been using since January 2, 1993 (under DOS!), Quicken, had started responding to requests to download stock quotes this way: "Quicken was unable to process the information in the file that was downloaded. We recommend that you try again." (It's their version of Sisyphus's torture: trying again produces the same message.)

It must be a couple of years since I started seeing warnings that Quicken was going to disable quotes in older versions of its software. My version, 2000, probably should have stopped functioning in 2004. But I'd begun to think it would never happen.

I recognize that by this time I am a valueless customer to Quicken's maker, Intuit. I tried the 2002 version (mostly so I could synch with Pocket Quicken on the Palm) and hated it; I found useless the 2005 version that came on a laptop. Intuit's idea for turning me into a valuable one is apparently to force me to upgrade to Quicken 2007. Even if 200x versions had been good, this wouldn't be a perfect idea: In 2005 Intuit dropped its UK product, and while its US product now handles multiple currencies, what about VAT? Worse, in two (or three) years' time I will be forced to do the whole thing again because of Intuit's sunset policies.

This is, of course, an entirely more aggressive approach than most software companies take. Even Microsoft, which regularly announces the dates on which it will stop supporting older software, doesn't make it inoperable. It's odd remembering that we used to cheer for Intuit, partly because it was a real pioneer in usable interface design, and partly because in the 1990s it was one of very, very few companies that had to compete with Microsoft and succeeded.

The bad news is that this is likely to be what the future is going to look like. Cory Doctorow, who has spent years following Hollywood's efforts to embed copy protection into television broadcasts and home video systems, has warned frequently that the upshot will be that things unexpectedly stop working. TiVo owners have already seen this in action: the company proposed to disable the 30-second skip popular among the advertising avoidant, and in some cases can also limit how long you can save programs and whether you can make copies.

I'm sure there are other examples that don't spring instantly to mind. It's part of the price of a connected world that the same features that allow benefits such as downloaded information and automatic software updates give the manufacturers options for changing the configuration we paid for at their own discretion. If these were hackers instead of software companies, we'd say they'd installed a "back door" into our systems to allow them to come in and rummage around whenever they wanted to, and we'd be deploying software to disable the back door and keep them out. Instead, we call these things "features" and apparently we're willing to pay software companies to install them.

All software has flaws; part of learning to use it involves figuring out how to work around them. When I bought it, Quicken was one of only two games in town; the other was Microsoft Money, and it's notable how few competitors they have. Quicken did a far better job, at least at the beginning, of understanding that personal finance software only really works for you if you can integrate it into the world of banking and credit cards you already live in. Intuit pioneered downloading bank and credit card data. Had I ever learned to use it right, it would have prepared my VAT returns for me.

But I never really did learn to use it right, and it's gotten worse over time as the software has disimproved (as the Irish say). The Quicken files on my computer have become hopelessly lost in confusion over split transactions that don't make sense; invoices that may or may not have been paid; mortgage interest I've never been able to figure out correctly; and stock spin-offs it's adamantly put in the wrong currency. Early on, Intuit created a product called Quick Invoice that did exactly what I wanted: it wrote invoices, simply and reliably, in about five seconds. This functionality was eventually subsumed into Quicken, which made it laborious and unpleasant. My least favorite quirk was that its numbering was unreliable.

I now realize that I had come to hate the software so much that I more or less stopped dealing with it other than for invoices and stock quotes. The original purpose for which I bought it, to save money by keeping track of bank balances, had long since been forgotten.

So, I say it's spinach and I say to hell with it. It will cost me at least $50 more than the price of Quicken 2007 to buy a decent piece of invoicing software and something like, say, Moneydance, and quite a few hours to start over from scratch. But at least I know that two years hence I won't be doing it again.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).