" /> net.wars: December 2018 Archives


December 28, 2018

Opening the source

Recently, Michael Salmony, who has appeared here before, seemed horrified to discover open access, the movement for publishing scientific research so that it is freely accessible to the public (who usually paid for it) instead of closed to subscribers. In an email, he wrote, "...looks like the Internet is now going to destroy science as well".

This is not my view.

The idea about science I grew up with was that good science depends on scientists' building on and reviewing each other's work - a self-correcting process that requires being able to critique and replicate each other's results. So the question we should ask is: does the business model of traditional publishing support that process? Are there other models that would support it better? Science spawns businesses, serves businesses, and may even be a business itself, but good-quality science first serves the public interest.

There are three issues here, separate but interlinked. The first is the process of science itself: how best to fund, support, and nurture it. The second is the business model of scientific *publishing*. The third, which relates to both, is how to combat abuse.

The second of these is the one that resonates with copyright battles past. Salmony: "OA reminds me warmly of Napster disrupting music publishing, but in the end iTunes (another commercial, quality controlled) model has won."

iTunes and the music industry are not the right models. No one dies of lack of access to Lady Gaga's latest hit. People *have* died through being unable to afford access to published research.

Plus, the push is coming from an entirely different direction. Napster specifically and file-sharing generally were created by young, anti-establishment independents who coded copyright bypasses because they could. The open access movement began with a statement of principles codified by university research types - mavericks, sure, but representing the Public Library of Science, Open Society Institute, BioMed Central, and universities in Montreal, London, and Southampton. My first contact with the concept was circa 1993, when World Health Organization staffer Christopher Zielinski raised the deep injustice of pricing research access out of developing countries' reach.

Sci-Hub is a symptom, not a cause. Another symptom: several months ago, 60 German universities canceled their subscriptions to Elsevier journals to protest the high fees and restricted access. Many scientists are offended by the journals' expectation that they will write papers for free and donate their time for peer review, and then pay to read the published results. One way we know this is that Sci-Hub builds its giant cache via educational institution proxies that bypass the paywalls. At least some of these are donated by frustrated people inside those institutions. Many scientists use it.

As I understand it, publication costs are incorporated into research grants; there seems no reason why open access should impede peer review or indexing. Why shouldn't this become financially sustainable and assure quality control as before?

A more difficult issue is that one reason traditional journals still matter is that academic culture has internalized their importance in determining promotions and tenure. Building credibility takes time, and many universities have been slow to adapt. However, governments and research councils in Germany, the UK, and South Africa are all pushing open access policies via their grant-making conditions.

Plus, the old model is no longer logistically viable in many fields as the pace of change accelerates. Computer scientists were first to ignore it, relying instead on conference proceedings and trading papers and research online.

Back to Salmony: "Just replacing one bad model with another one that only allows authors who can afford to pay thousands of dollars (or is based on theft, like Sci Hub) and that threatens the quality (edited, peer review, indexed etc) sounds less than convincing." In this he's at odds with scientists such as Ben Goldacre, who in 2007 called open access "self-evidently right and good".

This is the first issue. In 1992, Marcel C. LaFollette's Stealing into Print: Fraud, Plagiarism, and Misconduct in Scientific Publishing documented many failures of traditional peer review. In 2010, the Greek researcher John Ioannidis showed how often published medical findings fail to hold up. At Retraction Watch, science journalist Ivan Oransky documents remarkable endemic sloppiness and outright fraud. Admire the self-correction, but the reality is that journals have little interest in replication, preferring newsworthy new material - though not *too* new.

Ralph Merkle, the "third man", alongside Whit Diffie and Martin Hellman, inventing public key cryptography, has complained that journals favor safe, incremental steps. Merkle's cryptography idea was dismissed with: "There is nothing like this in the established literature." True. But it was crucial for enabling ecommerce.

Salmony's third point: "[Garbage] is the plague of the open Internet", adding a link to a DEF CON 26 talk. Sarah Jeong's Internet of Garbage applies.

Abuse and fakery are indeed rampant, but a lot is due to academic incentives. For several years, my 2014 article for IEEE Security & Privacy explaining the Data Retention and Investigatory Powers Act (2014) attracted invitations to speak at (probably) fake conferences and publish papers in (probably) fake journals. Real researchers tell me this is par for the course. But this is a problem of human predators, not "the open Internet", and certainly not open access.


Illustrations: Participants in drafting the Budapest principles (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 21, 2018

Behind you!

For one reason or another - increasing surveillance powers, increasing awareness of the extent to which online activities are tracked by myriad data hogs, Edward Snowden - crypto parties have come somewhat back into vogue over the last few years after a 20-plus-year hiatus. The idea behind crypto parties is that you get a bunch of people together and they all sign each other's keys. Fun! For some value of fun.

This is all part of the web of trust that is supposed to accrue when you use public key cryptography software like PGP or GPG: each new signature on a person's public key strengthens the trust you can have that the key truly belongs to that person. In practice, the web of trust - PGP's decentralized alternative to a hierarchical public key infrastructure - does not scale well, and the early 1990s excitement about at least the PGP version of the idea died relatively quickly.
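To make the mechanism concrete, here is a deliberately toy sketch in Python of signatures accumulating on keys. It is not how PGP actually computes trust - real implementations also weight each signer by an owner-assigned trust level ("full" versus "marginal") - but it shows the data structure at the heart of a crypto party: a growing graph of who vouches for whom.

    # Toy model of a PGP-style web of trust, for illustration only.
    # Keys are named nodes; each signature is an edge from the signer
    # to the key it vouches for. Real PGP also weights signers by an
    # owner-assigned trust level; this sketch skips that.

    signatures = {
        "alice": {"bob", "carol"},   # alice has signed bob's and carol's keys
        "bob":   {"carol", "dave"},
        "carol": {"dave"},
    }

    def signers_of(key: str) -> set[str]:
        """Everyone who has signed `key`."""
        return {signer for signer, signed in signatures.items() if key in signed}

    def is_trusted(key: str, introducers: set[str], needed: int = 2) -> bool:
        """Accept `key` if at least `needed` trusted introducers signed it."""
        return len(signers_of(key) & introducers) >= needed

    # Trusting alice and bob as introducers accepts carol (two signatures)
    # but not dave (only bob's signature counts so far).
    print(is_trusted("carol", {"alice", "bob"}))  # True
    print(is_trusted("dave", {"alice", "bob"}))   # False

Every additional signature from someone your correspondents already trust widens the circle of people who can safely accept your key - which is the whole point of the party.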

A few weeks ago, ORG Norwich held such a meeting, and I went along to help run a workshop on when and how you might want to use crypto. Like any security mechanism, encrypting email has its limits. Accordingly, before installing PGP and declaring, "Secure now!", a little threat modeling is a fine thing. As bad as it can be to operate insecurely, it is much, much worse to operate under the false belief that you are more secure than you are because the measures you've taken don't fit the risks you face.

For one thing, PGP does nothing to obscure metadata - that is, the record of who sent email to whom. Newer versions offer the option to encrypt the subject line, but then the question arises: how do you get busy people to read the message?

For another thing, even if you meticulously encrypt your email, check that the recipient's public key is correctly signed, and make no other mistakes, you are still dependent on your correspondent to take appropriate care of their archive of messages and not copy your message into a new email and send it out in plain text. The same is true of any other encrypted messaging program such as Signal; you depend on your correspondents to keep their database encrypted and either password-protect their phone and other devices or keep them inaccessible. And then, too, even the most meticulous correspondent can be persuaded to disclose their password.

For that reason, in some situations it may in fact be safer not to use encryption and remain conscious that anything you send may be copied and read. I've never believed that teenagers are innately better at using technology than their elders, but in this particular case they may provide role models: research has found that they are quite adept at using codes only they understand. To their grown-ups, it just looks like idle Facebook chatter.

Those who want to improve their own and others' protection against privacy invasion therefore need to think through what exactly they're trying to achieve.

Some obvious questions, partly derived from Steve Bellovin's book Thinking Security (and sketched as a checklist after the list), are:

- Who might want to attack you?
- What do they want?
- Are you a random target, a specific target, or a stepping stone to mount attacks on others?
- What do you want to protect?
- From whom do you want to protect it?
- What opportunities do they have?
- When are you most vulnerable?
- What are their resources?
- What are *your* resources?
- Whose security do you depend on, even though their decisions are out of your control?
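For those who want to write the answers down, one minimal way to capture them is as a structured record. This Python sketch is purely illustrative - the field names and the example values are my own assumptions, not taken from Bellovin's book:

    # A minimal sketch of a personal threat model as a structured record.
    # The fields mirror the questions above; the example values are
    # hypothetical, chosen only to show the shape of the exercise.

    from dataclasses import dataclass, field

    @dataclass
    class ThreatModel:
        assets: list[str]          # what you want to protect
        adversaries: list[str]     # who might want to attack you
        their_goals: list[str]     # what they want
        target_type: str           # "random", "specific", or "stepping stone"
        their_resources: list[str]
        your_resources: list[str]
        # security you rely on but don't control:
        dependencies: list[str] = field(default_factory=list)

    freelance_journalist = ThreatModel(
        assets=["source identities", "unpublished drafts"],
        adversaries=["commodity phishers", "a motivated litigant"],
        their_goals=["account credentials", "unpublished material"],
        target_type="specific",
        their_resources=["time", "lawyers"],
        your_resources=["encrypted laptop", "expert help such as Citizen Lab"],
        dependencies=["email provider", "phone vendor's patch schedule"],
    )
    print(freelance_journalist.target_type)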

At first glance, the simple answer to the first of those is "anyone and everyone". This helpful threat pyramid shows the tradeoff between the complexity of the attack and the number of people who can execute it. If you are the target of a well-funded nation-state that wants to get you, just you, and nobody else but you, you're probably hosed. Unless you're a crack Andromedan hacker unit (Bellovin's favorite arch-attacker), the imbalance of available resources will probably be insurmountable. If that's your situation, you want expert help - for example, from Citizen Lab.

Most of us are not in that situation. Most of us are random targets; beyond a raw bigger-is-better principle, few criminals care whose bank account they raid or which database they copy credit card details from. Today's highly interconnected world means that even a small random target may bring down other, much larger entities when an attacker leverages a foothold on our insignificant network to access the much larger ones that trust us. Recognizing who else you put at risk is an important part of thinking this through.

Conversely, the point about risks that are out of your control is important. Forcing everyone to use strong, well-designed passwords will not matter if the site they're used on stores them with inadequate protections.

The key point that most people forget: think about the individuals involved. Security is about practice, not just technology; as Bruce Schneier likes to say, it's a process, not a product. If the policy you implement makes life hard for other people, they will eventually adopt workarounds that make their lives more manageable. They won't tell you what they've done, and no one will be able to warn you where the risk is lurking.

Illustrations: Aladdin pantomime at Nottingham Playhouse, 2008 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 14, 2018

Entirely preventable

This week, the US House of Representatives Committee on Oversight and Government Reform used this phrase to describe the massive 2017 Equifax data breach: "Entirely preventable." It's not clear that the ensuing recommendations, while all sensible and valuable stuff - improve consumers' ability to check their records, reduce the use of Social Security numbers as unique identifiers, improve oversight of credit reporting agencies, increase transparency and accountability, hold federal contractors liable, and modernize IT security - will really prevent another similar breach from taking place. A key element was a bit of unpatched software that left open a vulnerability used by the attackers to gain a foothold - in part, the report says, because the legacy IT systems made patching difficult. Making it easier to do the right thing is part of the point of the recommendation to modernize the IT estate.

How closely is it feasible to micromanage companies the size and complexity of Equifax? What protection against fraud will we have otherwise?

The massive frustration is that none of this is new information or radical advice. On the consumer rights side, the committee is merely recommending practices that have been mandated in the EU for more than 20 years in data protection law. Privacy advocates have been saying for more than *30* years that the SSN is a perfect example of how a unique identifier should *not* be used. Patching software is so basic that you can pick any random top ten security tips and find it in the top three. We sort of make excuses for small businesses because their limited resources mean they don't have dedicated security personnel, but what excuse can there possibly be for a company the size of Equifax that holds the financial frailty of hundreds of millions of people in its grasp?

The company can correctly say that we are not its customers, and that it is not its job to care about us. Its actual customers - banks, financial services, employers, governments - are all well served. What's our problem? Zeynep Tufekci summed it up correctly on Twitter when she commented that we are not Equifax's customers but its victims. Until there are proportionate consequences for neglect and underinvestment in security, she said later, the companies and their departing-with-bonuses CEOs will continue scrimping on security, even though the smallest infraction means a consumer struggles for years to reclaim their credit rating.

If Facebook and Google should be regulated as public utilities, the same is even more true for the three largest credit agencies, Equifax, Experian, and TransUnion, who all hold much more power over us, and who are much less accountable. We have no opt-out to exercise.

But even the punish-the-bastards approach merely smooths over and repaints the outside of a very ugly tangle of amyloid plaques. Real change would mean, as Mydex CEO David Alexander is fond of arguing, adopting a completely different approach that puts each of us in charge of our own data and avoids creating these giant attacker-magnet databases in the first place. See also data brokers, which are invisible to most people.

Meanwhile, in contrast to the committee, other parts of the Five Eyes governments seem set on undermining whatever improvements to our privacy and security we can muster. Last week the Australian parliament voted to require companies to back-door their encryption when presented with a warrant. While the bill stops short of requiring technology companies to build in such backdoors as a permanent fixture - it says the government cannot require companies to introduce a "systemic weakness" or "systemic vulnerability" - the reality is that being able to break encryption on demand *is* a systemic weakness. Math is like that: either you can prove a theorem or you can't. New information can overturn existing knowledge in other sciences, but math is built on proven bedrock. The potential for a hole is still a hole, with no way to ensure that only "good guys" can use it - even if you can agree on who the good guys are.

In the UK, GCHQ has notified the intelligence and security committee that it will expand its use of "bulk equipment interference". In other words, having been granted the power to hack the world's computers - everything from phones and desktops to routers, cars, toys, and thermostats - when the 2016 Investigatory Powers Act was being debated, GCHQ now intends to break its promise to use that power sparingly.

As I wrote in a submission to the consultation, bulk hacking is truly dangerous. The best hackers make mistakes, and it's all too easy to imagine a hacking error becoming the cause of a 100-car pile-up. As smart meters roll out, albeit delayed, and the smart grid takes shape, these, too, will be "computers" GCHQ has the power to hack. You, too, can torture someone in their own home just by controlling their thermostat. Fun! And important for national security. So let's do more of it.

In a time when attacks on IT infrastructure are growing in sophistication, scale, and complexity, the most knowledgeable people in government, whose job it is to protect us, are deliberately advocating weakening it. The consequences that are doubtless going to follow the inevitable abuse of these powers - because humans are humans and the mindset inside law enforcement is to assume the worst of all of us - will be entirely preventable.


Illustrations: GCHQ listening post at dawn (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


December 6, 2018

Richard's universal robots

The robot in the video is actually a giant hoist attached to the ceiling. It has big grab bars down at the level of the person sitting on the edge of the bed, waiting. When the bars approach, she grabs them, lets the robot slowly help her up into a standing position, and then begins to move forward.

This is not how any of us imagines a care robot, but I am persuaded this is more like our future than the Synths in 2015's Humans, which are incredibly humanoid (helpfully for casting) but so, so far from anything ready for deployment. This thing, which Praminda Caleb-Solly showed at work in a demonstration video at Tuesday's The Shape of Things conference, is a work in progress. There are still problems, most notably that your average modern-build English home has neither high enough ceilings nor enough lateral space to accommodate it. My bedroom is about the size of the stateroom in the Marx Brothers movie A Night at the Opera; you'd have to put it in the hall and hope the grab bar assembly could reach through the doorway. But still.

As the news keeps reminding us, the Baby Boomer bulge will soon reach frailty. In industrialized nations, where mobility, social change, and changed expectations have broken up extended families, need will explode. In the next 12 years, Caleb-Solly said, a fifth of people over 80 - 4.8 million people in the UK - will require regular care. Today, the National Health Service is short almost 250,000 staff (a problem Brexit exacerbates). Somehow, we'll have to find 110,000 people to work in social care in England alone. Technology is one way to help fill that gap. Today, though, 30% of users abandon their assistive technologies; they're difficult to adapt to changing needs, difficult to personalize, and difficult to interact with.

Personally, I am not enthusiastic about having a robot live in my house and report on what I do to social care workers. But I take Caleb-Solly's point when she says, "We need smart solutions that can deal with supporting a healthy lifestyle of quality". That ceiling-hoist robot is part of a modular system that can add functions and facilities as people's needs and capacity change over time.

In movies and TV shows, robot assistants are humanoids, but that future is too far away to help the onrushing 4.8 million. Today's care-oriented robots have biological, but not human, inspirations: the PARO seal, or Pepper, which Caleb-Solly's lab likes because it's flexible and certified for experiments in people's homes. You may wonder what intelligence, artificial or otherwise, a walker needs, but given sensors and computational power the walker can detect how its user is holding it, how much weight it's bearing, whether the person's balance is changing, and help them navigate. I begin to relax: this sounds reasonable. And then she says, "Information can be conveyed to the carer team to assess whether something changed and they need more help," and I close down with suspicion again. That robot wants to rat me out.

There's a simple fix for that: assume the person being cared for has priorities and agency of their own, and have the robot alert them to the changes and let them decide what they want to do about it. That approach won't work in all situations; there are real issues surrounding cognitive decline, fear, misplaced pride, and increasing multiple frailties that make self-care a heavy burden. But user-centered design can't merely mean testing the device with real people with actual functional needs; the concept must extend to ownership of data and decision-making. Still, the robot walker in Caleb-Solly's lab taught her how to waltz. That has to count for something.

The project - CHIRON, for Care at Home using Intelligent Robotic Omni-functional Nodes - is a joint effort between Three Sisters Care, Caleb-Solly's lab, and Shadow Robot, and funded with £2 million over two years by Innovate UK.

Shadow Robot was the magnet that brought me here. One of the strangest and most eccentric stories in an already strange and eccentric field, Shadow began circa 1986, when the photographer Richard Greenhill was becalmed on a ship with nothing to do for several weeks but read the manual for the Sinclair ZX81. His immediate thought: you could control a robot with one of those! His second thought: I will build one.

By 1997, Greenhill's operation was a band of volunteers meeting every week in a north London house filled with bits of old wire and electronics scrounged from junkyards. By then, Greenhill had most of a hominid with deceptively powerful braided-cloth "air muscles". By my next visit, in 2009, former volunteer Rich Walker had turned Shadow into a company selling a widely respected robot hand, whose customers include NASA, MIT, and Carnegie Mellon. Improbably, the project begun by the man with no degrees, no funding, and no university affiliation has outlasted numerous more famous efforts filled with degree-bearing researchers who used up their funding, published, and disbanded. And now it's contributing robotics research expertise to CHIRON.

Seen Tuesday, Greenhill was eagerly outlining a future in which we can all build what we need and everyone can live for free. Well, why not?


Illustrations: Praminda Caleb-Solly presenting on Tuesday (Kois Miah); Pepper; Richard Greenhill demonstrating his personally improved scooter.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.