
October 14, 2022

Signaled

A while back, I was trying to get a friend to install the encrypted messaging app Signal.

"Oh, I don't want another messaging app."

Well, I said, it's not *another* messaging app. Use it to replace the app you currently use for texting (SMS) and it will just sit there showing you your text messages. But whenever you encounter another Signal user those messages will be encrypted. People sometimes accepted this; more often, they wanted to know why I couldn't just use WhatsApp, like their school group, tennis club, other friends... (Well, see, it may be encrypted, but it's still owned by the Facebook currently known as Meta.)

This week I learned that soon I won't be able to make this argument any more, because...Signal will be dropping SMS support for Android users sometime in the next few months. I don't love either the plan or the vagueness of its timing. (For reasons I don't entirely understand, this doesn't apply to the nether world of iPhone users.)

The company's blog posting lists several reasons. Apparently the app's SMS integration is confusing to many users, who are unclear about when their messages are encrypted and when they're not. Whether this is true is being disputed in the related forum thread discussing this decision. On the bah! side is "even my grandmother can use it" (snarl) and on the other the valid evidence of the many questions users have posted about this over the years in the support forums. Maybe solvable with some user interface tweaks?

Second, the pricing differential between texting and Signal messages, which transit the Internet as data, has reversed since Signal began. Where data plans used to be rare and expensive, and SMS texts cheap or bundled with phone service, today data plans are common, and SMS has become expensive in some parts of the world. There, the confusion between SMS and Signal messaging really matters. I can't argue with that except to note that equally it's a problem that does *not* apply in many countries. Again, perhaps solvable with user settings...but it's fair enough to say that supporting this may not be the best use of Signal's limited resources. I don't have insight into the distribution of Signal's global user base, and users in other countries are likely to be facing bigger risks than I am.

Third is sort of a purity argument: it's inherently contradictory to include an insecure protocol in an app intended to protect security and privacy. "Inconsistent with our values." The forum discussion is split on this. While many agree with this position, many of the rest of us live in a world that includes lots of people who do not use, and do not want to use (see above), Signal, and it is vastly more convenient to have a single messaging app that handles both.

Signal may not like to stress this aspect, but one problem with trusting an encrypted messaging app in the first place is that the privacy and security are only as good as your correspondents' intentions. Maybe all your contacts set their messages to disappear after a week, password-protect and encrypt their message database, and assign every contact an alias. Or, maybe they don't password-protect anything, never delete anything, and mirror the device to three other computers, all of which they leave lying around in public. You cannot know for sure. So a certain level of insecurity is baked into the most secure installations no matter what you do. I don't see SMS as the biggest problem here.

I think this decision is going to pose real, practical problems for Signal in terms of retaining and growing its user base; it surely does not want the app's presence on a phone to become governments' watch-this-person flag. At least in Western countries, SMS is inescapable. It would be better if two-factor authentication used a less hackable alternative, but at the moment SMS remains the 2FA vector of corporate choice. We consumers don't actually get to choose to dump it until the corporations do. A switch is apparently happening very slowly behind the scenes in the form of RCS, which I don't even know if my aged phone supports. In the meantime, Signal becomes the "another messaging app" we began with - and historically, diminished convenience has been one of the biggest blocks to widespread adoption of privacy-enhancing technologies.

Signal's decision raises the possibility that we are heading into a time when texting people becomes far more difficult. It may become like the early days, when you could only text people using the same phone company as you - for example, Apple has yet to adopt RCS. Every new contact will have to start with a negotiation by email or phone: how do I text you? In *addition* to everything else.

The Internet isn't splintering (yet); email may be despised, but every service remains interoperable. But the mobile world looks like breaking into silos. I have family members who don't understand why they can't send me iMessages or FaceTime me (no iPhone?), and friends I can't message unless I want to adopt WhatsApp or Telegram (groan - another messaging app?).

Signal may well be right that this move is a win for security, privacy, and user clarity. But for communication? In *this* house, it's a frustrating regression.

Illustrations: Midjourney's rendering of "railway signal tracks crossing".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 9, 2022

The lost penguin

One of the large, ignored problems of cybersecurity is that every site, every supplier, every software coder, every hardware manufacturer makes decisions as if theirs were the only rules you ever have to observe.

The last couple of weeks I've been renewing my adventures with Linux, which started in 2016, and continued later that year and again in 2018 and, undocumented, in 2020. The proximate cause this time was the release of Ubuntu 22.04. Through every version back to 14.04 I've had the same long-running issue: the displays occasionally freeze for no consistent reason, and the only way out is a cold boot. Would this time be the charm?

Of course, the first thing that happened was that trying to upgrade the system in place failed. This isn't my first rodeo (see 2016, part II), and so I know that unpicking and troubleshooting a failure often takes longer than doing a clean install. I had an empty hard drive at the ready...

All the good things I said about Ubuntu installation in 2018 are still true: Canonical and the open source community have done a very good job of building a computer-in-a-box. It installed and it worked, although I hate the Gnome desktop it ships with.

Except.

Everything is absolutely fine unless, as I whined in 2018, you want to connect to some Windows machines. For that, you must download and install Samba. When it doesn't work, Samba is horrible, and grappling with it revives all my memories of someone telling me, the first time I heard of Linux, that "Linux is as user-friendly as a cornered rat."

Last time round, I got the thing working by reading lots of web pages and adding more and more stuff to the config file until it worked. This was not necessarily a good thing, because in the process I opened more shares than I needed to, and because the process was so painful I never felt like going back to put in a few constraints. Why would I care? I'm one person with a very small (wired) computer network, and it's OK if the machines see more of each other's undergarments than is strictly necessary.

Since then, the powers that code have been diligently at work to make the system more secure. So to stop people from doing what I did, they have tweaked Samba so that by default it's not possible to share your Home directory. Their idea is that you'll have a Public directory that is the only thing you share, and any file that's in it is there because you made a conscious decision to put it there.

I get the thinking, but I don't want to do things their way, I want to do things my way. And my way is that I want to share three directories inside the Home directory. Needless to say, I am not the only recalcitrant person, and so people have published three workarounds. I did them all. Result: my Windows machines can now access the directories I wanted to share on the Ubuntu machine. And: the Ubuntu machine is less secure for a value of security that isn't necessarily helpful in a tiny wired home network.
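
For anyone attempting the same thing, the share definitions live in /etc/samba/smb.conf and look something like this - a minimal sketch with hypothetical share names and paths, not my actual configuration:

    # /etc/samba/smb.conf - sketch only; share name, path, and user are invented
    [music]
       path = /home/me/music
       read only = no
       browseable = yes
       valid users = me

After editing, restart Samba (sudo systemctl restart smbd); if the share doesn't appear, smbclient -L localhost will at least tell you what the server thinks it's offering.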

That was only half the problem.

Ubuntu can see there's a Windows network, and it will even sometimes list the machines correctly, but ask it to access one of them, and it draws a blank. Almost literally a blank: it just hangs there going, "Opening <machine name>" until you give up and hit Cancel. Someone has wrapped a towel around its head, apparently thinking, like the Bugblatter Beast of Traal, that if it can't see you, you can't see it. I now see that this is exactly the same analogy, in almost the identical words, that I used in 2018. I swear I typed it all new this time.

That someone appears to be Microsoft. The *other* problem, it turns out, is that Microsoft also wanted to improve security, and so it's made it harder to open Windows 10 machines to networking with interlopers such as people who run Ubuntu. I forget now the incantation I had to wave over it to get it to cooperate, but the solution I found only worked to admit the Ubuntu shares, not open up the Windows ones.

Seems to me there are two problems here.

One is the widening gap between consumer products and expert computing. The reality of mass adoption confirms that consumer computing has in fact gotten much easier over time. But the systems we rely on are more sophisticated and complex, and they're meeting more sophisticated and complex needs - and doing anything outside that mainstream has accordingly become much harder, requiring a lot of knowledge, training, patience, and expertise. I fall right into that gap (which is why my website has no Javascript and I'm afraid to touch the blogging software that powers net.wars). In 2016, Samba just worked.

The other, though, is a problem I've touched on before: decisions about product security are made in silos without considering the wider ecosystem and differing contexts in which they're used. Microsoft's or Apple's answer to the sort of connection problem I have is: buy our stuff. The open source community's reaction isn't much different. Which leaves me...wanting to bang all their heads together.


Illustrations: Little penguin swimming (via Calistemon at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 14, 2022

The visible computer

I have a friend I would like to lend anyone who thinks computers have gotten easier in the last 30 years.

The other evening, he asked how to host a Zoom conference. At the time, we were *in* a Zoom call, and I've seen him on many others, so he seemed competent enough.

"Di you have a Zoom account?" I said.

"How do I get that?"

I directed him to the website. No, not the window with our faces; that's the client. "Open up - what web browser do you use?"

"Er...Windows 10?"

"That's the computer's operating system. What do you use to go to a website?"

"Google?"

Did he know how to press ALT-TAB to see the open windows on his system? He did not. Not even after instruction.

But eventually he found the browser, Zoom's website, and the "Join" menu item. He created a password. The password didn't work. (No idea.) He tried to reset the password. More trouble. He decided to finish it later...

To be fair, computers *have* gotten easier. On a 1992 computer, I would have had to write my friend a list of commands to install the software, and he'd have had to type them perfectly every time and learn new commands for each program's individual interface. But the comparative ease of use of today's machines is more than offset by the increased complexity of what we're doing with them. It would never have occurred to my friend even two years ago that he could garnish his computer with a webcam and host video chats around the world.

I was reminded of this during a talk on new threats to privacy that touched on ubiquitous computing and referenced the 1991 paper The Computer for the 21st Century, by Mark Weiser, then head of the famed Xerox PARC research lab.

Weiser imagined the computer would become invisible, a theme also picked up by Donald Norman in his 1998 book, The Invisible Computer. "Invisible" here means we stop seeing it, even though it's everywhere around us. Both Weiser and Norman cited electric motors, which began as large power devices to which you attached things, and then disappeared inside thousands of small and large appliances. When computers are everywhere, they will stop commanding our attention (except when they go wrong, of course). Out of sight, out of mind - but in constant sight also means out of mind because our brains filter out normal background conditions to focus on the exceptional.

Weiser's group built three examples, which they called tabs (inch-scale), pads (foot-scale), and boards (yard-scale). His tabs sound rather like today's tracking tags. Like the Active Badges at Olivetti Research in Cambridge they copied (the privacy implications of which horrified the press at the time), they could be used to track people and things, direct calls, automate diary-keeping, and make presentations and research portable throughout the networked area. In 2013, when British journalist Simon Bisson revisited this same paper, he read them more broadly as sensors and effectuators. Pads, in Weiser's conception, were computerized sheets of "scrap" paper to be grabbed and used anywhere and left behind for the next person. Weiser called them an "antidote to windows", in that instead of cramming all programs into a window you could spread dozens of pads across a full-sized desk (or floor) to work with. Boards were displays, more like bulletin boards, that could be written on with electronic "chalk" and shared across rooms.

"The real power of the concept comes not from any one of these devices; it emerges from the interaction of all of them," Weiser wrote.

In 2013, Bisson suggested Weiser's "embodied virtuality" was taking shape around us as sensors began enabling the Internet of Things and smartphones became the dominant interface to the Internet. But I like Weiser's imagined 21st century computing better than what we actually have. While cloud services can make our devices more or less interchangeable as long as we have the right credentials, that only works if broadband is uninterruptedly reliable. But even then, has anyone lost awareness of the computer - phone - in their hand or the laptop on their desk? Compare today to what Weiser thought would be the case 20 years later - which would have been 2011:

Most important, ubiquitous computers will help overcome the problem of information overload. There is more information available at our fingertips during a walk in the woods than in any computer system, yet people find a walk among trees relaxing and computers frustrating. Machines that fit the human environment, instead of forcing humans to enter theirs, will make using a computer as refreshing as taking a walk in the woods.

Who feels like that? Certainly not the friend we began with. Even my computer expert friends seem one and all convinced that their computers hate them. People in search of relaxation watch TV (granted, maybe on a computer), play guitar (even if badly), have a drink, hang with friends and family, play a game (again, maybe on a computer), work out, take a bath. In fact, the first thing people do when they want to relax is flee their computers and the prying interests that use them to spy on us. Worse, we no longer aspire to anything better. Those aspirations have all been lost to A/B testing to identify the most profitable design.


Illustrations: Windows XP's hillside wallpaper (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 30, 2019

The Fregoli delusion

In biology, a monoculture is a bad thing. If there's only one type of banana, a fungus can wipe out the entire species instead of, as now, just the most popular one. If every restaurant depends on Yelp to find its customers, Yelp's decision to replace restaurants' phone numbers with ones under its own control is a serious threat. And if, as we wrote here some years ago, everyone buys everything from Amazon, gets all their entertainment from Netflix, and gets all their mapping, email, and web browsing from Google, what difference does it make that you're iconoclastically running Ubuntu underneath?

The same should be true in the culture of software development. It ought to be obvious that a monoculture is as dangerous there as on a farm. Because: new ideas, robustness, and innovation all come from mixing. Plenty of business books even say this. It's why research divisions create public spaces, so people from different disciplines will cross-fertilize. It's why people and large businesses live in cities.

And yet, as the journalist Emily Chang documents in her 2018 book Brotopia: Breaking Up the Boys' Club of Silicon Valley, Silicon Valley technology companies have deliberately spent the last couple of decades progressively narrowing their culture. To a large extent, she blames the spreading influence of the Paypal Mafia. At Paypal's founding, she writes, this group, which includes Palantir founder Peter Thiel, LinkedIn founder Reid Hoffman, and Tesla supremo Elon Musk, adopted the basic principle that to make a startup lean, fast-moving, and efficient you needed a team who thought alike. Paypal's success and the diaspora of its early alumni disseminated a culture in which hiring people like you was a *strategy*. This is what #MeToo and fights for equality are up against.

Businesses are as prone to believing superstitions as any other group of people, and unicorn successes are unpredictable enough to fuel weird beliefs, especially in an already-insular place like Silicon Valley. Yet, Chang finds much earlier roots. In the mid-1960s, System Development Corporation hired psychologists William Cannon and Dallis Perry to create a profile to help it to identify recruits who would enjoy the new profession of computer programming. They interviewed 1,378 mostly male programmers, and found this common factor: "They don't like people." And so the idea that "antisocial" was a qualification was born, spreading outwards through increasingly popular "personality tests" and, because of the cultural differences in the way girls and boys are socialized, gradually and systematically excluding women.

Chang's focus is broad, surveying the landscape of companies and practices. For personal inside experiences, you might try Ellen Pao's Reset: My Fight for Inclusion and Lasting Change, which documents the experiences at Kleiner Perkins that led her to bring a lawsuit, and at Reddit, where she was pilloried for trying to reduce some of the system's toxicity. Or, for a broader range, try Lean Out, a collection of personal stories edited by Elissa Shevinsky.

Chang finds that even Google, which began with an aggressive policy of hiring female engineers that netted it technology leaders Susan Wojcicki, CEO of YouTube, Marissa Mayer, who went on to try to rescue Yahoo, and Sheryl Sandberg, now COO of Facebook, failed in the long term. Today its male-female ratio is average for Silicon Valley. She cites Slack as a notable exception; founder Stewart Butterfield set out to build a different kind of workplace.

In that sense, Slack may be the opposite of Facebook. In Zucked: Waking Up to the Facebook Catastrophe, Roger McNamee tells the mea culpa story of his early mentorship of Mark Zuckerberg and the company's slow pivot into posing problems he believes are truly dangerous. What's interesting to read in tandem with Chang's book is his story of the way Silicon Valley hiring changed. Until around 2000, hiring rewarded skill and experience; the limitations on memory, storage, and processing power meant companies needed trained and experienced engineers. Facebook, however, came along at the moment when those limitations had vanished and the dot-com bust had finished playing out. Suddenly, products could be built and scaled up much faster; open source libraries and the arrival of cloud suppliers meant they could be developed by less experienced, less skilled, *younger*, much *cheaper* people; and products could be free, paid for by advertising. Couple this with 20 years of Reagan deregulation and the influence, which he also cites, of the Paypal Mafia, and you have the recipe for today's discontents. McNamee writes that he is unsure what the solution is; his best effort at the moment appears to be advising the Center for Humane Technology, led by former Google design ethicist Tristan Harris.

These books go a long way toward explaining the world Caroline Criado-Perez describes in 2019's Invisible Women: Data Bias in a World Designed for Men. Her discussion is not limited to Silicon Valley - crash test dummies, medical drugs and practices, and workplace design all appear - but her main point applies. If you think of one type of human as "default normal", you wind up with a world that's dangerous for everyone else.

You end up, as she doesn't say, with a monoculture as destructive to the world of ideas as those fungi are to Cavendish bananas. What Zucked and Brotopia explain is how we got there.


Illustrations: Still from Anomalisa (2015).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 23, 2019

Antepenultimate

For many reasons, I've never wanted to use my mobile phone for banking. For one thing, I have a desktop machine with three 24-inch monitors and a full-size functioning keyboard; why do I want to poke at a small screen with one finger?

Even if I did, the corollary is that mobile phones suck for typing passwords. For banking, you typically want the longest and most random password you can generate. For mobile phone use, you want something short, easy to remember and type. There is no obvious way to resolve this conflict, particularly in UK banking, where you're typically asked to type in three characters chosen from your password. It is amazingly easy to make mistakes counting when you're asked to type in letter 18 of a 25-character random string. (Although: I do admire the literacy optimism one UK bank displays when it asks for the "antepenultimate" character in your password. It's hard to imagine an American bank using this term.)

Beyond that, mobile phones scare me for sensitive applications in general; they seem far too vulnerable to hacking, built-in vulnerabilities, SIM swapping, and, in the course of wandering the streets of London, loss, breakage, or theft. So mine is not apped up for social media, ecommerce, or anything financial. I accept that two-factor authentication is a huge step forward in terms of security, but does it have to be on my phone? In this, I am, of course, vastly out of step with the bulk of the population, who are saying instead: "Can't it be on my phone?" What I want, however, is a 2FA device I can turn off and stash out of harm's way in a drawer at home. That approach would also mean not having to give my phone number to an entity that might, like Facebook has in the past, coopt it into their marketing plans.

So, it is with great unhappiness that I discover that the incoming Payment Services Directive 2 and the long-standing effort to get rid of cheques are combining to force me to install a mobile banking app.

PSD2 may or may not prove to be the antepenultimate gift from the EU28. At Wired, Laurie Clark explains the result of the directive's implementation, which is that ecommerce sites, as well as banks, must implement two-factor authentication (2FA) by September 14. Under this new regime, transactions above £30 (about $36.50, but shrinking by the Brexit-approaching day) will require customers to prove at least two of the traditional three security factors: something they have (a gadget such as a smart phone, a specific browser on a specific machine, or a secure key), something they know (passwords and the answers to secondary questions), and something they are (biometrics, such as facial recognition). As Clark says, retailers are not going to love this, because anything that adds friction costs them sales and customers.

My guess is that these new requirements will benefit larger retailers and centralized services at the expense of smaller ones. Paypal, Amazon, and eBay already have plenty of knowledge about their customers they can exploit to be confident of a customer's identity. Requiring 2FA will similarly privilege existing relationships over new ones.

So far, retail sites don't seem to be discussing their plans. UK banking sites, however, began adopting 2FA some years ago, mostly in the form of secure keys that they issued and replaced as needed - credit card-sized electronic one-time pads. Those sites are now simply dropping the option of logging on with limited functionality without the key. These keys have their problems - especially non-inclusive design with small, fiddly keys and hard-to-read LCD screens - but I liked the option.

Ideally, this would be a market defined by standards, so people could choose among different options - such as the Yubikey. Where the banks all want to go, though, is to individual mobile phone apps that they can also use for marketing and upselling. Because of the broader context outlined above, I do not want this.

One bank I use is not interested in my broader context, only its own. It has ruled: must download app. My first thought was to load the app onto my backup, second-to-last phone, figuring that its unpatched vulnerabilities would be mitigated by its being turned off, stuck in a drawer, and used for nothing else. Not an option: its version of Android is two decimal places too old. No app for *you*!

At Bentham's Gaze, Steven Murdoch highlights a recent Which? study that found that those who can't afford, can't use, or don't want smartphones or who live with patchy network coverage will be shut out of financial services.

Murdoch, an expert on cryptography and banking security, argues that by relying on mobile apps banks are outsourcing their security to customers and telephone networks, which he predicts will fail to protect against criminals who infiltrate the phone companies and other threats. An additional crucial anti-consumer aspect is the refusal of phone manufacturers to support ongoing upgrades, forcing obsolescence on a captive audience, as we've complained before. This can only get worse as smartphones are less frequently replaced while being pressed into use for increasingly sensitive functions.

In the meantime, this move has had so little press that many people are being caught by surprise. There may be trouble ahead...

Illustrations: GCHQ secure phone.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 14, 2018

Hide by default

Last week, defenddigitalme, a group that campaigns for children's data privacy and other digital rights, and Sonia Livingstone's group at the London School of Economics assembled a discussion of the Information Commissioner's Office's consultation on age-appropriate design for information society services, which is open for submissions until September 19. The eventual code will be used by the Information Commissioner when she considers regulatory action, may be used as evidence in court, and is intended to guide website design. It must take into account both the child-related provisions of the General Data Protection Regulation and the United Nations Convention on the Rights of the Child.

There are some baseline principles: data minimization, comprehensible terms and conditions and privacy policies. The last is a design question: since most adults either can't understand or can't bear to read terms and conditions and privacy policies, what hope of making them comprehensible to children? The summer's crop of GDPR notices is not a good sign.

There are other practical questions: when is a child not a child any more? Do age bands make sense when the capabilities of one eight-year-old may be very different from those of another? Capacity might be a better approach - but would we want Instagram making these assessments? Also, while we talk most about the data aggregated by commercial companies, government and schools collect much more, including biometrics.

Most important, what is the threat model? What you implement and how is very different if you're trying to protect children's spaces from ingress by abusers than if you're trying to protect children from commercial data aggregation or content deemed harmful. Lacking a threat model, "freedom", "privacy", and "security" are abstract concepts with no practical meaning.

There is no formal threat model, as the Yes, Minister episode The Challenge (series 3, episode 2) would predict. Too close to "failure standards". The lack is particularly dangerous here, because "protecting children" means such different things to different people.

The other significant gap is research. We've commented here before on the stratification of social media demographics: you can practically carbon-date someone by the medium they prefer. This poses a particular problem for academics, in that research from just five years ago is barely relevant. What children know about data collection has markedly changed, and the services du jour have different affordances. Against that, new devices have greater spying capabilities, and, the Norwegian Consumer Council finds (PDF), Silicon Valley pays top-class psychologists to deceive us with dark patterns.

Seeking to fill the research gap are Sonia Livingstone and Mariya Stoilova. In their preliminary work, they are finding that children generally care deeply about their privacy and the data they share, but often have little agency and think primarily in interpersonal terms. The Cambridge Analytica scandal has helped inform them about the corporate aggregation that's taking place, but they may, through familiarity, come to trust people such as their favorite YouTubers and constantly available things like Alexa in ways their adults dislike. The focus on Internet safety has left many thinking that's what privacy means. In real-world safety, younger children are typically more at risk than older ones; online, the situation is often reversed because older children are less supervised, explore further, and take more risks.

The breath of passionate fresh air in all this is Beeban Kidron, an independent - that is, appointed - member of the House of Lords who first came to my attention by saying intelligent and measured things during the post-referendum debate on Brexit. She refuses to accept the idea that oh, well, that's the Internet, there's nothing we can do. However, she *also* genuinely seems to want to find solutions that preserve the Internet's benefits and incorporate the often-overlooked child's right to develop and make mistakes. But she wants services to incorporate the idea of childhood: if all users are equal, then children are treated as adults, a "category error". Why should children have to be resilient against systemic abuse and indifference?

Kidron, who is a filmmaker, began by doing her native form of research: in 2013 she made the full-length documentary InRealLife, which studied a number of teens using the Internet. While the film concludes on a positive note, many of the stories depressingly confirm some parents' worst fears. Even so, it's a fine piece of work because it's clear she was able to gain the trust of even the most alienated of the young people she profiles.

Kidron's 5Rights framework proposes five essential rights children should have: remove, know, safety and support, informed and conscious use, digital literacy. To implement these, she proposes that the industry should reverse its current pattern of defaults which, as is widely known, 95% of users never change (while 98% never read terms and conditions). Companies know this, and keep resetting the defaults in their favor. Why shouldn't it be "hide by default"?

This approach sparked ideas. A light that tells a child they're being tracked or recorded so they can check who's doing it? Collective redress is essential: what 12-year-old can bring their own court case?

The industry will almost certainly resist. Giving children the transparency and tools with which to protect themselves, resetting the defaults to "hide"...aren't these things adults want, too?


Illustrations: Beeban Kidron (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 11, 2018

The third penguin

You never have time to disrupt yourself and your work by updating your computer's software until Bad Things happen and you're forced to find the time you don't have.

So last week the Ubuntu machine's system drive, which I had somehow failed to notice dated to 2012, lost the will to live. I had been putting off upgrading to 64-bit; several useful pieces of software are no longer available in 32-bit versions, such as Signal for Desktop, Free File Sync, and Skype.

It transpired that 18.04 LTS had been released a few days earlier. Latest version means longer until forced to upgrade, right?

The good news is that Ubuntu's ease of installation continues to improve. The experience of my first installation, about two and a half years ago, of trying umpteen things and hoping one would eventually work, is gone. Both audio and video worked first time out, and although I still had to switch video drivers, I didn't have to search AskUbuntu to do it. Even more than in my second installation, Canonical has come very, very close to one-click installation. The video freezes that have been plaguing the machine since the botched 16.04 update in 2016 appear to have largely gone.

However, making it easy also makes some things hard. Reason: making it easy means eliminating things that require effort to configure and that might complicate the effortlessness. In the case of 18.04, that means that if you have a mixed network you still have to separately download and configure Samba, the thing that makes it possible for an Ubuntu machine to talk to a Windows machine. I understand this choice, I think: it's reasonable to surmise that the people who need an easy installation are unlikely to have mixed networks, and the people who do have them can cope with downloading extra software. But Samba is just mean.

An ideal installation routine would do something like this (a rough sketch in script form follows the list):
- Ask the names and IP addresses of the machines you want to connect to;
- Ask what directories you want to share;
- Use that information to write the config file;
- Send you to pages with debugging information if it doesn't work.
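
In script form - purely illustrative, since no such installer exists - the routine might look like:

    #!/usr/bin/env python3
    # Sketch of the imagined Samba setup dialogue above.
    # Hypothetical - not part of any real Ubuntu installer.
    machines = input("Windows machines to reach (name=address, comma-separated): ")
    shares = input("Directories to share (comma-separated paths): ")

    stanzas = []
    for path in (p.strip() for p in shares.split(",") if p.strip()):
        name = path.rstrip("/").rsplit("/", 1)[-1]
        stanzas.append(f"\n[{name}]\n   path = {path}\n   read only = no\n")

    # Appending to the live config would need root privileges.
    with open("/etc/samba/smb.conf", "a") as conf:
        conf.writelines(stanzas)

    print("Machines to test against:", machines or "(none given)")
    print("If browsing fails, see the Samba wiki's troubleshooting pages.")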

Of course, it doesn't work like that. I eventually found the page I think helped me most last time. That half-solved the problem, in that the Windows machines could see the Ubuntu machine but not the reverse. As far as I could tell, the Ubuntu machine had adopted the strategy of the Ravenous Bugblatter Beast of Traal and wrapped a towel around its head on the basis that if it couldn't see them they couldn't see *it*.

Many DuckDuckGo searches later the answer arrived: apparently for 18.04 the decision was made to remove a client protocol. The solution was to download and install a bit of software called smbclient, which would restore the protocol. That worked.
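
(In case it helps anyone else stuck there, the fix amounted to a standard package install, something like:

    sudo apt update
    sudo apt install smbclient

after which the Windows shares reappeared.)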

Far more baffling was the mysterious, apparently random appearance of giant colored graphics in my Thunderbird inbox, all large enough to block numerous subject lines. This is not an easy search to frame, and I've now forgotten the magical combination of words that produced the answer: Ubuntu 18.04 has decorated itself with a colorful set of bright, shiny *emoji*. These, it turns out, you can remove easily. Once you have, the symbols sent to torture you shrink back down to tiny black and white blobs that disturb no one. Should you feel a desperate need to find out what one is, you can copy and paste it into Emojipedia, and there it is: that thing you thought was a balloon was in fact a crystal ball. Like it matters.

I knew going in that Unity, the desktop interface that came with my previous versions of Ubuntu, had been replaced by Gnome, which everyone predicted I would hate.

The reality is that it's never about whether a piece of software is good or bad; it's always about what you're used to. If your computer is your tool rather than your plaything, the thing you care most about is not having to learn too much that's new. I don't mind that the Ubuntu machine doesn't look like Windows; I prefer to have the reminder that it's different. But as much as I'd disliked it at first, I'd gotten used to the way Unity groups and displays windows, the size of the font it used, and the controls for configuring it. So, yes, Gnome annoyed, with its insistence on offering me apps I don't want, tiny grey fonts, wrong-side window controls, and pointless lockscreens that all wanted reconfiguration. KDE desktop, which a friend insisted I should try, didn't seem much different. It took only two days to revert to Unity, which is now "community-maintained", polite GNU/Linux-speak for "may not survive for long". Back to some version of normal.

In my view, Ubuntu could still fix some things. It should be easier to add applications to the Startup list. The Samba installation should be automated and offered as an option in system installation with a question like, "Do you need to connect to a Windows machine on your network?" User answers yes or no, Samba is installed or not with a script like that suggested above.

But all told, it remains remarkable progress. I salute the penguin wranglers.


Illustrations: Penguins.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 8, 2012

Insecure at any speed

"I have always depended on the kindness of strangers," Blanche says graciously to the doctor hauling her off to the nuthouse at the end of Tennessee Williams' play A Streetcar Named Desire. And while she's quite, quite mad in her genteel Old Southern delusional way she is still nailing her present and future situation, which is that she's going to be living in a place where the only people who care about her are being paid to do so (and given her personality, that may not be enough).

Of course it's obvious to anyone who's lying in a hospital bed connected to a heart monitor that they are at the mercy of the competence of the indigenous personnel. But every discussion of computer passwords tends to go as though the problem is us. Humans choose bad passwords: short, guessable, obvious, crackable. Or we use the same ones everywhere, or we keep cycling the same two or three when we're told to change them frequently. We are the weakest link.

And then you read this week's stories that major sites for whom our trust is of business-critical importance - LinkedIn, eHarmony, and Last.fm - have been storing these passwords in such a way that they were vulnerable to not only hacking attacks but also decoding once they had been copied. My (now old) password, I see by typing it into LeakedIn for checking, was leaked but not cracked (or not until I typed it in, who knows?).

This is not new stuff. Salting passwords before hashing and storing them - adding random characters so that identical passwords yield different stored hashes and precomputed cracking tables become useless - has been with us for more than 30 years. If every site does these things a little differently, the differences help mitigate the risk we users bring upon ourselves by using the same passwords all over the place. It boggles the mind that these companies could be so stupid as to ignore what has been best practice for a very long time.
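
For the non-technical: salting is neither hard nor exotic. Here is a minimal sketch of the idea in Python - invented function names and a modern key-derivation call, not anything these sites actually ran:

    import hashlib, hmac, os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # A fresh random salt per user means identical passwords
        # produce different stored hashes, defeating precomputed tables.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def check_password(password: str, salt: bytes, stored: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, stored)  # constant-time compare

The stored salt is not secret; its only job is to make every hash unique.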

The leak of these passwords is probably not immediately critical. For one thing, although millions of passwords leaked out, they weren't attached to user names. As long as the sites limit the number of times you can guess your password before they start asking you more questions or lock you out, the odds that someone can match one of those 6.5 million passwords to your particular account are...well, they're not 6.5 million to one if you've used a password like "password" or "123456", but they're small. Although: better than your chances of winning the top lottery prize.

Longer term may be the bigger issue. As Ars Technica notes, the decoded passwords from these leaks and their cryptographically hashed forms will get added to the rainbow tables used in cracking these things. That will shrink the space of good, hard-to-crack passwords.

Most of the solutions to "the password problem" aim to fix the user in one way or another. Our memories have limits - so things like Password Safe will remember them for us. Or those impossible strings of letters and numbers are turned into a visual pattern by something like GridSure, which folded a couple of years ago but whose software and patents have been picked up by CryptoCard.

An interesting approach I came across late last year is sCrib, a USB stick that you plug into your computer and that generates a batch of complex passwords it will type in for you. You can pincode-protect the device and it can also generate one-time passwords and plug into a keyboard to protect against keyloggers. All very nice and a good idea except that the device itself is so *complicated* to use: four tiny buttons storing 12 possible passwords it generates for you.

There's also the small point that Web sites often set rules such that any effort to standardize on some pattern of tough password is thwarted. I've had sites reject passwords for being too long, or for including a space or a "special character". (Seriously? What's so special about a hyphen?) Human factors simply escape the people who set these policies, as XKCD long ago pointed out.

But the key issue is that we have no way of making an informed choice when we sign up for anything. We have simply no idea what precautions a site like Facebook or Gmail takes to protect the passwords that guard our personal data - and if we called to ask we'd run into someone in a call center whose job very likely was to get us to go away. That's the price, you might say, of a free service.

In every other aspect of our lives, we handle this sort of thing by having third-party auditors who certify quality and/or safety. Doctors have to pass licensing exams and answer to medical associations. Electricians have their work inspected to ensure it's up to code. Sites don't want to have to explain their security practices to every Sheldon and Leonard? Fine. But shouldn't they have to show *someone* that they're doing the right things?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 23, 2011

Duck amuck

Back in about 1998, a couple of guys looking for funding for their start-up were asked this: How could anyone compete with Yahoo! or Altavista?

"Ten years ago, we thought we'd love Google forever," a friend said recently. Yes, we did, and now we don't.

It's a year and a bit since I began divorcing Google. Ducking the habit is harder than those "They have no lock-in" financial analysts thought when Google went public: as if habit and adaptation were small things. Easy to switch CTRL-K in Firefox to DuckDuckGo; significantly harder to unlearn ten years of Google's "voice".

When I tell this to Gabriel Weinberg, the guy behind DDG - his recent round of funding lets him add a few people to experiment with different user interfaces and redo DDG's mobile application - he seems to understand. He started DDG, he told The Rise to the Top last year, because of the increasing amount of spam in Google's results. Frustration made him think: for many queries, wouldn't searching just del.icio.us and Wikipedia produce better results? Since his first weekend mashing that up, DuckDuckGo has evolved to include over 50 sources.

"When you type in a query there's generally a vertical search engine or data source out there that would best serve your query," he says, "and the hard problem is matching them up based on the limited words you type in." When DDG can make a good guess at identifying such a source - such as, say, the National Institutes of Health - it puts that result at the top. This is a significant hint: now, in DDG searches, I put the site name first, where on Google I put it last. Immediate improvement.

This approach gives Weinberg a new problem, a higher-order version of the Web's broken links: as companies reorganize, change, or go out of business, the APIs he relies on vanish.

Identifying the right source is harder than it sounds, because the long tail of queries requires DDG to make assumptions about what's wanted.

"The first 80 percent is easy to capture," Weinberg says. "But the long tail is pretty long."

As Ken Auletta tells it in Googled, the venture capitalist Ram Shriram advised Sergey Brin and Larry Page to sell their technology to Yahoo! or maybe Infoseek. But those companies were not interested: the thinking then was portals and keeping site visitors stuck as long as possible on the pages advertisers were paying for, while Brin and Page wanted to speed visitors away to their desired results. It was only when Shriram heard that, Auletta writes, that he realized that baby Google was disruptive technology. So I ask Weinberg: can he make a similar case for DDG?

"It's disruptive to take people more directly to the source that matters," he says. "We want to get rid of the traditional user interface for specific tasks, such as exploring topics. When you're just researching and wanting to find out about a topic there are some different approaches - kind of like clicking around Wikipedia."

Following one thing to another, without going back to a search engine...sounds like my first view of the Web in 1991. But it also sounds like some friends' notion of after-dinner entertainment, where they start with one word in the dictionary and let it lead them serendipitously from word to word and book to book. Can that strategy lead to new knowledge?

"In the last five to ten years," says Weinberg, "people have made these silos of really good information that didn't exist when the Web first started, so now there's an opportunity to take people through that information." If it's accessible, that is. "Getting access is a challenge," he admits.

There is also the frontier of unstructured data: Google searches the semi-structured Web by imposing a structure on it - its indexes. By contrast, Mike Lynch's Autonomy, which just sold to Hewlett-Packard for around $11 billion, uses Bayesian logic to search unstructured data, which is what most companies have.

"We do both," says Weinberg. "We like to use structured data when possible, but a lot of stuff we process is unstructured."

Google is, of course, a moving target. For me, its algorithms and interface are moving in two distinct directions, both frustrating. The first is Wal-Mart: stuff most people want. The second is the personalized filter bubble. I neither want nor trust either. I am more like the scientists Linguamatics serves: its analytic software scans hundreds of journals to find hidden links suggesting new avenues of research.

Anyone entering a category that's as thoroughly dominated by a single company as search is now is constantly asked: how can you possibly compete with [the dominant company]? Weinberg must be sick of being asked about competing with Google. And he'd be right, because it's the wrong question. The right question is: how can he build a sustainable business? He's had some sponsorship while his user numbers are relatively low (currently 7 million searches a month) and, eventually, he's talked about context-based advertising - yet he's also promising little spam and privacy - no tracking. Now, that really would be disruptive.

So here's my bet. I bet that DuckDuckGo outlasts Groupon as a going concern. Merry Christmas.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


November 25, 2011

Paul Revere's printing press

There is nothing more frustrating than watching smart, experienced people reinvent known principles. Yesterday's Westminster Forum on cybersecurity was one such occasion. I don't blame them, or not exactly: it's just maddening that we have made so little progress, while the threats keep escalating. And it is from gatherings like this one that government policy is made.

Rephrasing Bill Clinton's campaign slogan, "It's the people, stupid," said Philip Virgo, chairman of the security panel of the IT Livery Company, to kick off the day, a sentiment echoed repeatedly by nearly every other speaker. Yes, it's the people - who trust when they shouldn't, who attach personal devices to corporate networks, who disclose passwords when they shouldn't, who are targeted by today's Facebook-friending social engineers. So how many people experts were on the program? None. Psychologists? No. Nor any usability experts or people whose jobs revolve around communication, either. (Or women, but I'm prepared to regard that as a separate issue.)

Smart, experienced guys, sure, who did a great job of outlining problems and a few possible solutions. Somewhere toward the end of the proceedings, someone allowed in passing that yes, it's not a good idea to require people to use passwords that are too complex to remember easily. This is the state of their art? It's 12 years since Angela Sasse and Anne Adams covered this territory in Users Are Not the Enemy. Sasse has gone on to help found the field of security economics, which seeks to quantify the cost of poorly designed security - not just in data breaches and DoS attacks but in the lost productivity of frustrated, overburdened users. Sasse argues that the problem isn't so much the people as user-hostile systems and technology.

"As user-friendly as a cornered rat," Virgo says he wrote of security software back in 1983. Anyone who's looked at configuring a firewall lately knows things haven't changed that much. In a world of increasingly mass-market software and devices, security software has remained resolutely elitist: confusing error messages, difficult configuration, obscure technology. How many users know what to do when their browser says a Web site certificate is invalid? Or how to answer anti-virus software that asks whether you want to authorise HIPS/RegMod-007?

"The current approach is not working," said William Beer, director of information security and cybersecurity for PriceWaterhouseCoopers. "There is too much focus on technology, and not enough focus from business and government leaders." How about academics and consumers, too?

There is no doubt, though, that the threats are escalating. Twenty years ago, the biggest worry was that a teenaged kid would write a virus that spread fast and furious in the hope of getting on the evening news. Today, an organized criminal underground uses personal information to target a small group of users inside RSA, leveraging that into a threat to major systems worldwide. (Trend Micro CTO Andy Dancer said the attack began in the real world with a single user befriended at their church. I can't find verification, however.)

The big issue, said Martin Smith, CEO of The Security Company, is that "There's no money in getting the culture right." What's to sell if there's no technical fix? Like when your plane is held to ransom by the pilot, or when all it takes to publish 250,000 US diplomatic cables is one alienated, low-ranked person with a DVD burner and a picture of Lady Gaga? There's a parallel here to pharmaceuticals: one reason we have few weapons to combat rampaging drug resistance is that for decades developing new antibiotics was not seen as a profitable path.

Granted, you don't, as Dancer said afterwards, want to frame security as an issue of "fixing the people" (but we already know better than that). Nor is it fair to ban company employees from social media lest some attacker pick it up and use it to create a false sense of trust. Banning the latest new medium, said former GCHQ head John Bassett, is just the instinctive reaction in a disturbance; in 1775 Boston the "problem" was Paul Revere's printing press stirring up trouble.

Nor do I, personally, want to live in a trust-free world. I'm happy to assume the server next to me is compromised, but "Trust no one" is a lousy way to live.

Since perfect security is not possible, Dancer advised, organizations should plan for the worst. Good advice. When did I first hear it? Twenty years ago and most months since, by Peter Neumann in his RISKS Forum. It is depressing and frustrating that we are still having this conversation as if it were new - and that we will have it all over again over the next decade as smart meters roll out to 26 million British households by 2020, opening up the electrical grid to attacks that are already being predicted and studied.

Neumann - and Dancer - is right. There is no perfect security because it's in no one's interest to create it. Plan for the worst.

Or, as Gene Spafford put it in 1989: "The only truly secure system is one that is powered off, cast in a block of concrete, and sealed in a lead-lined room protected by armed guards - and even then I have my doubts."

For everything else, there's a stolen Mastercard.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 4, 2011

The identity layer

This week, the UK government announced a scheme - Midata - under which consumers will be able to reclaim their personal information. The same day, the Centre for the Study of Financial Innovation assembled a group of experts to ask what the business model for online identification should be. And: whatever that model is, what the government's role should be. (For background, here's the previous such discussion.)

My eventual thought was that the government's role should be to set standards; it might or might not also be an identity services provider. The government's inclination now is to push this job to the private sector. That leaves the question of how to serve those who are not commercially interesting; at the CSFI meeting the Post Office seemed the obvious contender for both pragmatic and historical reasons.

As Mike Bracken writes in the Government Digital Service blog posting linked above, the notion of private identity providers is not new. But what he seems to assume is that what's needed is federated identity - that is, in Wikipedia's definition, a means for linking a person's electronic identity and attributes across multiple distinct systems. What I meant is a system in which one may have many limited identities that are sufficiently interoperable that you can make a choice which to use at the point of entry to a given system. We already have something like this on many blogs, where commenters may be offered a choice of logging in via Google, OpenID, or simply posting a name and URL.

The government gateway circa 2000 offered a choice: getting an identity certificate required payment of £50 to, if I remember correctly, Experian or Equifax, or other companies whose interest in preserving personal privacy is hard to credit. The CSFI meeting also mentioned tScheme, an industry consortium to provide trust services; outside of relatively small niches it has made little impact. Similarly, fifteen years ago, the government intended, as part of implementing key escrow for strong cryptography, to create a network of trusted third parties that it would license and, by implication, control. The intention was that the TTPs should be folks that everyone trusts - like banks. Hilarious, we said *then*. Moving on.

In between then and now, the government also mooted a completely centralized identity scheme - that is, the late, unlamented ID card. Meanwhile, we've seen the growth of a set of competing American/global businesses that would all like to be *the* consumer identity gateway and that managed to steal first-mover advantage from existing financial institutions. Facebook, Google, and Paypal are the three most obvious. Microsoft had hopes, perhaps too early, when in 1999 it created Passport (now Windows Live ID). More recently, it was the home for Kim Cameron's efforts to reshape online identity via the company's now-cancelled CardSpace, and Brendon Lynch's adoption of U-Prove, based on Stefan Brands' technology. U-Prove is now being piloted in various EU-wide projects. There are probably lots of other organizations that would like to get in on such a scheme, if only because of the data and linkages a federated system would grant them. Credit card companies, for example. Some combination of mobile phone manufacturers, mobile network operators, and telcos. Various medical outfits, perhaps.

An identity layer that gives fair and reasonable access to a variety of players who jointly provide competition and consumer choice seems like a reasonable goal. But it's not clear that this is what either the UK's distastefully spelled "Midata" or the US's NSTIC (which attracted similar concerns when first announced) has in mind. What "federated identity" sounds like is the convenience of "single sign-on", which is great if you're working in a company and need to use dozens of legacy systems. When you're talking about identity verification for every type of transaction you do in your entire life, however, a single gateway is a single point of failure and, as Stephan Engberg, founder of the Danish company Priway, has often said, a single point of control. It's the Facebook cross-all-the-streams approach, embedded everywhere. Engberg points to a discussion paper inspired by two workshops he facilitated for the Danish National IT and Telecom Agency (NITA) in late 2010 that covers many of these issues.

Engberg, who describes himself as a "purist" when it comes to individual sovereignty, says the only valid privacy-protecting approach is to ensure that each time you go online on each device you start a new session that is completely isolated from all previous sessions and then have the choice of sharing whatever information you want in the transaction at hand. The EU's LinkSmart project, which Engberg was part of, created middleware to do precisely that. As sensors and RFID chips spread along with IPv6, which can give each of them its own IP address, linkages across all parts of our lives will become easier and easier, he argues.
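A toy sketch of that principle - an illustration only, not LinkSmart's actual middleware - looks something like this: each session gets fresh, unlinkable state, and anything disclosed is an explicit, per-transaction choice.

```python
# Illustration of per-session isolation (not LinkSmart's API): every
# session starts with a fresh random handle, and sharing an attribute
# is a deliberate act scoped to that session alone.

import secrets

class IsolatedSession:
    def __init__(self) -> None:
        # Nothing carries over from any previous session.
        self.pseudonym = secrets.token_hex(16)
        self.disclosed: dict[str, str] = {}

    def disclose(self, field: str, value: str) -> None:
        """Share one attribute, deliberately, for this session only."""
        self.disclosed[field] = value

session_a = IsolatedSession()
session_a.disclose("delivery_city", "London")

session_b = IsolatedSession()
# session_b shares nothing and cannot be linked back to session_a.
assert session_a.pseudonym != session_b.pseudonym
```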

We've seen often enough that people will choose convenience over complexity. What we don't know is what kind of technology will emerge to help us in this case. The devil, as so often, will be in the details.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 28, 2011

Crypto: the revenge

I recently had occasion to try out GNU Privacy Guard (GPG), the Free Software Foundation's version of PGP, Phil Zimmermann's legendary Pretty Good Privacy software. It was the first time I'd encrypted an email message since about 1995, and I was both pleasantly surprised and dismayed.

First, the good. Public key cryptography is now implemented exactly the way it should have been all along: once you've installed it and generated a keypair, encrypting a message is ticking a box or picking a menu item inside your email software. Even key management is handled by a comprehensible, well-designed graphical interface. Several generations of hard work have created this and also ensured that the various versions of PGP, OpenPGP, and GPG are interoperable, so you don't have to worry about who's using what. Installation was straightforward and the documentation is good.
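For the curious, here is roughly what that tick-the-box operation does underneath the GUI - a sketch driving the gpg command line from Python, where the recipient address is a placeholder and their public key is assumed to be on your keyring already.

```python
# A rough sketch of encrypting a message with gpg, driven from
# Python. The recipient is a placeholder; gpg must be installed and
# the recipient's public key already imported.

import subprocess

message = b"Meet at the usual place at noon."

# gpg reads the plaintext on stdin and writes ASCII-armoured
# ciphertext to stdout; --trust-model always skips the interactive
# "do you trust this key?" prompt for the purposes of this demo.
result = subprocess.run(
    ["gpg", "--armor", "--encrypt",
     "--recipient", "alice@example.com",
     "--trust-model", "always"],
    input=message,
    capture_output=True,
    check=True,
)
print(result.stdout.decode())
```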

Now, the bad. That's where the usability stops. There are so many details you can get wrong - any one of which messes the whole thing up - that if this stuff were a form of contraception, desperate parents would be giving babies away on street corners.

Item: the subject line doesn't get encrypted. There is nothing you can do about this except put a lot of thought into devising a subject line that will compel people to read the message but that simultaneously does not reveal anything of value to anyone monitoring your email. That's a neat trick.

Item: watch out for attachments, which are easily accidentally sent in the clear; you need to encrypt them separately before bundling them into the message.
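A sketch of that workaround, with filenames and recipient as placeholders: encrypt the file first, then attach the resulting .gpg file instead of the original.

```python
# Encrypt the attachment as a separate step, then attach
# report.pdf.gpg rather than report.pdf. Names are placeholders.

import subprocess

subprocess.run(
    ["gpg", "--encrypt",
     "--recipient", "alice@example.com",
     "--output", "report.pdf.gpg",
     "report.pdf"],
    check=True,
)
```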

Item: while there is a nifty GPG plug-in for Thunderbird - Enigmail - Outlook, being commercial software, is less easily supported. GPG's GpgOL module works only with Outlook 2003 (SP2 and above) and 2007, and not on 64-bit Windows. The problem is that it's hard enough to get people to change *one* habit, let alone several.

Item: lacking appropriate browser plug-ins, you also have to tell them to stop using Webmail if the service they're used to won't support IMAP or POP3, because they won't be able to send encrypted mail or read what others send them over the Web.

Let's say you're running a field station in a hostile area. You can likely get users to persevere despite these points by telling them that this is their work system, for use in the field. Most people will put up with some inconvenience if they're being paid to do so and/or it's temporary and/or you scare them sufficiently. But that strategy violates one of the basic principles of crypto-culture, which is that everyone should be encrypting everything so that sensitive traffic doesn't stand out. Its adherents are of course completely right, just as they were in 1993, when the big political battles over crypto were being fought.

Item: when you connect to a public keyserver to check or download someone's key, that connection is in the clear, so anyone surveilling you can see who you intend to communicate with.
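One partial mitigation, assuming your GnuPG build supports hkps and your keyserver offers it (the server name and key ID below are placeholders): fetch keys over TLS, so at least the lookup itself isn't readable in transit - though an observer can still see that you contacted a keyserver.

```python
# Fetch a key over an hkps (TLS) connection rather than plain hkp.
# Server and key ID are placeholders; requires hkps support in your
# GnuPG build and on the keyserver.

import subprocess

subprocess.run(
    ["gpg", "--keyserver", "hkps://keyserver.example.org",
     "--recv-keys", "0xDEADBEEFDEADBEEF"],
    check=True,
)
```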

Item: you're still at risk with regard to traffic data. This is what RIPA and data retention are all about. What's more significant? Being able to read a message that says, "Can you buy milk?" or the information that the sender and receiver of that message correspond 20 times a day? Traffic data reveals the pattern of personal relationships; that's why law enforcement agencies want it. PGP/GPG won't hide that for you; instead, you'll need to set up a proxy or use Tor to mix up your traffic and also protect your Web browsing, instant messaging, and other online activities. As Tor's own people admit, it slows performance, although they're working on it (PDF).

All this says we're still a long way from a system that the mass market will use. And that's a damn shame, because we genuinely need secure communications. Like a lot of people in the mid-1990s, I'd have thought that by now encrypted communications would be the norm. And yet not only is SSL, which protects personal details in transit to ecommerce and financial services sites, the only really mass-market use, but it's in trouble. Partly, this is because of the technical issues raised in the linked article - too many certification authorities, too many points of failure - but it's also partly because hardly anyone understands how to check that a certificate is valid or knows what to do when warnings pop up that it's expired or issued for a different name. The underlying problem is that many of the people who like crypto see it as both a cool technology and a cause. For most of us, it's just more fussy software. The big advance since the mid 1990s is that at least now the *developers* will use it.
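For what it's worth, here is what "checking that a certificate is valid" amounts to mechanically - a sketch using Python's standard library against a hypothetical host. The default context verifies the chain against the system's trusted CAs and checks the hostname, raising an error on failure rather than popping up a warning most people will click through.

```python
# A sketch of programmatic certificate checking. The hostname is a
# placeholder; ssl.create_default_context() verifies the certificate
# chain and the hostname, and a failure raises ssl.SSLError instead
# of offering an ignorable warning.

import socket
import ssl

hostname = "www.example.com"
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("issuer: ", cert["issuer"])
        print("expires:", cert["notAfter"])
```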

Maybe mobile phones will be the thing that makes crypto work the way it should. See, for example, Dave Birch's current thinking on the future of identity. We've been arguing about how to build an identity infrastructure for 20 years now. Crypto is clearly the mechanism. But we still haven't solved the how.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.