" /> net.wars: April 2017 Archives


April 28, 2017

Underbanking

There's this word I never heard until I started coming to Consult Hyperion's annual Tomorrow's Transactions Forum: "underbanked". It's definitely a thing. In its 2015 biennial survey, the Federal Deposit Insurance Corporation, which insures American consumers' bank deposits, estimated that 8% of Americans have no bank account and 20% are "underbanked", meaning that although they have a bank account they also rely on alternative financial services such as check cashers, informal friends-and-family lending, and payday lenders for at least some of their needs. Globally, 2 billion people were unbanked in 2015.

At this year's Forum (for past years see 2016, 2015, 2014, 2012, and 2010), University of Pennsylvania professor Lisa Servon challenged that framing. "Have you ever heard anyone say they wish they had more banking in their lives?" she asked. Policymakers tend to ask, "How can we get them into bank accounts?" instead of, "How can we get banks to serve these people better?"

Servon is the author of The Unbanking of America. In it she commits an act of journalism, working for four months at the check-cashing service RiteCheck and later at a payday lender. Affluent people generally presume that anyone using these "predatory" services is desperate and too naive to understand they're being gouged. Servon finds instead that most are people in difficult situations making rational decisions. They *are* often desperate - but banks are among the reasons why. It's in the language: if some people are "creditworthy", then others must be *un*worthy, and by implication they must be improved.

Servon argues that changes in policy and banking regulation have led to escalating fees and practices that are decidedly hostile to consumers, especially those whose lives are so precariously financed that any glitch - a car breakdown, illness, a late paycheck - cascades into damage from which recovery can take years. Contracting wages and government safety nets add to the pressure. So, as she says, if it's Friday and you need groceries for the weekend, it is more predictable and safer to pay RiteCheck's 2% fee to cash your paycheck than to risk that the check you write for groceries will hit during the bank-imposed delay in crediting the funds. In other words, on a $200 check you pay $4 to eliminate the chance of bank charges of $30 to $100. In a panel, Paul Makin, Chyp's head of financial inclusion, used a word to describe the banks' approach that applies equally to Servon's customers: "derisking". Makin then pointed out that in the UK Direct Debit poses the same set of risks for people with unpredictable income streams, such as freelances.
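For the technically minded, here is a toy expected-cost comparison of that choice. The $200 check, 2% fee, and $30-$100 charges come from the paragraph above; the probability of a badly timed debit is invented purely for illustration:

    # Toy comparison: cash a $200 paycheck at a check casher, or deposit it
    # and risk the grocery check hitting during the bank's crediting delay.
    # Fee and charge figures are from the column; the bounce probability
    # is a hypothetical chosen for illustration.

    check = 200.00
    casher_fee = 0.02 * check        # RiteCheck's ~2% fee: $4, known up front
    bounce_charge = 30.00            # low end of the $30-$100 bank charges
    p_bounce = 0.20                  # invented: a 1-in-5 chance of a bad hit

    expected_bank_cost = p_bounce * bounce_charge

    print(f"Check casher, certain cost:  ${casher_fee:.2f}")
    print(f"Bank account, expected cost: ${expected_bank_cost:.2f}")
    # The expected costs are close, but only one of them is predictable -
    # and for precariously financed customers, predictability is the point.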

paulmakin.jpg"It's frustrating," Makin added, "because we have the power to reengineer Direct Debit." In his example, sending a notification of an upcoming payment and asking people to push a button to pay when they were ready would remove much of that risk for many.
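A minimal sketch of what that push-to-pay Direct Debit might look like - the names and flow here are hypothetical, not any actual Bacs or Direct Debit API:

    # Hypothetical sketch of Makin's reengineered Direct Debit: notify the
    # payer first, and collect only after they confirm they are ready.
    # Invented for illustration; not a real Direct Debit/Bacs interface.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class PendingPayment:
        payee: str
        amount: float
        due: date
        confirmed: bool = False

    def notify(payment: PendingPayment) -> None:
        # Today's Direct Debit simply pulls the money on the due date;
        # here the payer gets advance warning instead.
        print(f"Upcoming: £{payment.amount:.2f} to {payment.payee} "
              f"due {payment.due}. Tap to pay when ready.")

    def confirm(payment: PendingPayment) -> None:
        # The "push a button" step: the collection is submitted only
        # after this, so a freelance awaiting an invoice can time it.
        payment.confirmed = True

    bill = PendingPayment("Energy Co", 80.00, date(2017, 5, 1))
    notify(bill)
    confirm(bill)  # the payer taps once their account can cover it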

Financial inclusion has frequently popped up at these events (Makin was involved in designing Kenya's M-Pesa), but where the previous years linked above buzzed about mobile wallets, the blockchain, or the Internet of Things, and the first event was obsessed with virtual currencies, this year none of these commanded the stage. Instead, the oncoming train was the second Payment Services Directive (PSD2), which is intended to open up banking to new players and services. As "open banking" becomes a reality over the next two years, interoperability becomes an enormous issue - and so, inevitably, does inclusion. As the session on transit made plain, the next generation of large public systems cannot be built without inclusion: it is more failure-prone and expensive to build systems to serve the rich and (shrinking) middle class and then find ways to bolt on support for the less advantaged. The more flexible your system and the more approaches it supports, the better. As Ben Whitaker, a co-founder of the mobile ticketing company Masabi, said, no medium will cover 100% of the population, so deploying multiple options is essential. For Servon's stressed-out former customers, Transport for London's contactless debit card payments may present a vicious circle waiting to happen: the debit hits the bank at the wrong moment, the bank bounces it and charges a fee, the customer can't board the bus to work, the customer gets fired.

The split between people who see bank accounts as the first step to respectability and those for whom they pose unacceptable risks is about more than just money, as the freelance example above shows. Servon cites studies showing that millennials, under far more pressure than their parents were at their age, want much more help from financial services than banks understand - help that today's newcomer apps are beginning to offer. Just as many are not interested in owning cars, many are not interested in having traditional bank accounts and may never become so. The result is also a split between those who want "frictionless" payments - merchants, who want us to spend more without noticing soon enough to stop ourselves, and people with more money than patience - and everyone else. For the rest of us, friction that makes us pause to say, "What am I doing here?" is a good thing.


Illustrations: Checks cashed; Lisa Servon; Paul Makin.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 21, 2017

Who needs the internet?

In August 1999, I was commissioned by the then-new magazine ComputerActive to write a piece on "technostress" - how people handle information overload. It turned out that everyone I knew had a strategy, which usually involved being a refusenik about some particular thing. A science fiction writer friend, for example, explained that he never made phone calls. If someone wanted to talk to him enough, they'd call him. His sister had no answering machine so she wouldn't ever have to call anyone back. Someone else had an email address but never disclosed it to anyone because "some people spend two hours every morning just doing their email!"

An investment bank researcher went to bed at 9pm every night to escape the constant stream of information - and the after-effects of a long day spent in an over-airconditioned office under too-bright fluorescent lighting surrounded by the cacophony of the trading floor. One academic during a particularly stressful time kept his home undisturbed by having no phone there. And so on. Around then, I remember asking a Labour MP who was opining at length about What Needed to Be Done about the internet, "Do you answer your own email?" He looked at me as if I'd asked him if he did his own grocery shopping. "My secretary does it." Right. But he's the one making policy about it.

Since then, the influx of information has increased, but we're more used to it. Even so: there are still hold-outs. A new acquaintance (late 60s, I guess) refuses either to have a mobile phone or to use the internet. "People can phone me," she said, which sounds reasonable until you talk to someone who needs to communicate with her regularly: "It drives me crazy."

This week, Congressman Jim Sensenbrenner (R-WI) justified voting to overturn the FCC's privacy rules and allow ISPs to exploit their subscribers' browsing histories by saying, "Nobody has to use the internet". As "Ccumming" observed in the comments at Ars Technica, "To prove his point he should not use the internet for his re-election campaign and see how it goes."

Sensenbrenner is 73, which means the internet has been available to him for only about a third of his life, and it's unclear if he's ever used it himself. I'm ten years younger, and the internet-less proportion of my own life is 58% and shrinking steadily. When you can remember living a rich, full life without the internet, it surely seems more luxury than necessity. However, the progression from "who would want that?" to "who really needs that?" to "what do we do about the people who don't have it?" was amazingly fast - people watching the upward takeup trajectory began fretting about the digital divide as early as the 1990s. Lastminute.com co-founder Martha Lane Fox has been campaigning for years to get that last recalcitrant percentage online.

In September 2016, Pew Research reported that 13% of American adults don't use the internet, a number that hasn't budged in three years. Who are they? Pew asked. They are: 41% of over-65s; a third of adults who didn't finish high school; and disproportionate numbers of both rural residents and people in households with annual incomes of less than $30,000. So: older people, less-educated people, poorer people, and rural people. It's easy to surmise that the latter two groups may lack affordable access. For comparison, in the UK, Ofcom finds that 86% of adults have the internet at home and spend an average of 25 hours a week using it. So: 14% don't have it, about half of whom don't intend to get it. Most of these (83%) are older people. Half don't think they need it; a quarter (mostly over 65) think they're "too old"; another quarter don't want to "own a computer". A declining number (15%) think it's too expensive.

Sensenbrenner was answering a constituent, who was trying to differentiate between content companies like Google, for which there are alternatives (try DuckDuckGo), and ISPs, which in many parts of the US are effective monopolies. By contrast, in the UK, where regulation has forced BT to open access to its infrastructure to competitors, in many locations there are five or six consumer ISPs to choose among, and many more business ISPs. Even so, in 2014 when Cybersalon collected frustrations at the Web We Want festival, inadequate access came top of the list.

So: who doesn't need the internet? In the early 1990s, the snide response would have been, "People with friends." Now, the profile of the person who can live without it looks something like this: has no schoolchildren (because homework is increasingly assigned, completed, and turned in online); has no need to apply for jobs (because job applications...ditto); is their own boss; either has no interest in news or still has a local newspaper and/or TV station; still has a nearby bank, book store, movie theater, grocery store, and whatever else they require to live their idea of a civilized life; has access to traditional media to publish and distribute whatever they want; and is either in no demand at all and doesn't care, or in so much demand that they can set the terms of engagement, like my friend who doesn't make phone calls.

So what Sensenbrenner means is that *he* doesn't need the internet. He is in a small and shrinking class. If he used the internet, he would know this.


Illustrations: Jim Sensenbrenner; Martha Lane Fox.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 14, 2017

Re-accommodating...

Last year, some folks at MIT implemented a version of the trolley problem they called the Moral Machine, a series of scenarios in which you choose which people and/or pets a self-driving car should sacrifice in a fatal accident. I chose to save the old people at the expense of the children and animals, on the basis that experience and knowledge are societally expensive to replace.

The problem with this experiment is its unreality. Neither humans nor machines make decisions this way. For humans, instincts of pure self-preservation kick in. Mostly, we just try to stop, like the people in the accelerating Toyotas (whose best strategy would appear to have been to turn the car off entirely); beyond that, the obvious response is to try to save ourselves and our passengers while trying not to hit anyone else. There is never going to be enough time or cognitive space to count the pets, or establish that the potential victim on the sidewalk is a Nobel prize-winning physicist, or debate whether the fact that he's already done his significant work means it would be better to hit him to spare the baby to the left.

Given the small amount of time at its disposal in a crunch, I'm willing to bet that the software driving autonomous vehicles won't make these calculations either. Certainly not today, when we're still struggling to distinguish between a truck and a cloud. Ultimately, algorithms for making such choices will start with simple rules that then get patched by special interests until they become a monster like our tax system. The values behind the rules will be crucial, which is the point of MIT's experiment.

The simplest possible setting is: kill the fewest people. So, now, do you want to buy a self-driving car that may turn traitor and kill your child in a crisis, for the statistically greater good? These values will be determined by car and software manufacturers, and given their generally risk-averse lawyers, it's more likely the vehicle will, as in Dexter Palmer's Version Control, try to hand off both the wheel and the liability to the human driver, who will then become what Madeleine Elish, at We Robot 2016, called the human-machine system's "moral crumple zone".
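As a caricature of how such a rule set might accrete patches - entirely invented, and not any manufacturer's actual logic - consider:

    # Caricature of crash-choice rules accreting patches over time.
    # Entirely invented for illustration; no real vehicle works this way.

    def choose_trajectory_v1(options):
        # The simplest possible setting: kill the fewest people.
        return min(options, key=lambda o: o["people_hit"])

    def choose_trajectory_v7(options):
        # Years of special-interest patches later, the simple rule has
        # grown weights encoding somebody's values - the tax-code monster.
        def cost(o):
            c = o["people_hit"] * 1.0
            c += o.get("occupants_hit", 0) * 0.5   # manufacturers: protect buyers
            c += o.get("liability_risk", 0) * 2.0  # lawyers: minimize lawsuits
            return c
        return min(options, key=cost)

    options = [
        {"people_hit": 1, "occupants_hit": 0, "liability_risk": 3},
        {"people_hit": 2, "occupants_hit": 2, "liability_risk": 0},
    ]
    print(choose_trajectory_v1(options))  # picks the first: fewest people hit
    print(choose_trajectory_v7(options))  # picks the second: cheapest lawsuit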

There are already situations where exactly this kind of optimization takes place, and this week we saw one of them at play in the case of the passenger dragged bleeding and probably concussed off United Airlines flight 3411 (operated by Republic Airline). Farhad Manjoo, writing in the New York Times, argues that the airlines' race to the bottom of customer service has been technology-fueled. Of course. The quality of customer service is not easily quantified for either airline or customer. But seat prices, the cost of food and beverages, traveler numbers, staffing levels, allowed working hours - all these are numbers that can be fed into an algorithm with the operational instruction, "Optimize for profits". Airline travel today is highly hierarchical; every passenger's financial value to the airline can be precisely calculated.

MIT's group focused on things like jobs (physicist, criminal), success (Nobel prize!), age, and apparent vulnerability (toddler, disabled...). A different group might calculate social value, asking how many people would be hurt - and how much - by your death. That approach might save the parents of young children and kill anti-social nerds and old people whose friends are all dead. Neither approach is how today's algorithms value people, because those algorithms are predominantly owned by businesses, as Frank Pasquale has written in his book The Black Box Society.

It is the job of the programming humans to apply the ethical brakes. But, as University of Maryland professor Danielle Citron pointed out in her 2008 paper Technological Due Process, programmers are bad at that, and on-the-ground humans tend to do what the machine tells them. This is especially true of overworked, overscheduled humans whose bosses penalize discretion. MapLight has documented the money United has spent lobbying against consumer-friendly airline regulation. And so, incredible as it seems to most of us, despite a planeful of protesting passengers, neither the flight crew nor the Chicago Department of Aviation stopped the proceedings that left a man bleeding on the floor, concussed, with two broken front teeth and in need of sinus surgery - all because he protested, as many of us would, at being told to vacate a seat he'd paid for and been previously authorized to occupy. All backed by a 35,000-word contract of carriage, which United appears to have violated in any case.

Bloomberg recently reported that airlines make more money selling miles than seats. In other words, like Google and Facebook, airlines are now more multi-sided markets than travel service companies. Even once-a-decade flyers see themselves as customers who should be "always right". Instead, financial reality, combined with post-consolidation lack of competition and post-9/11 suspicion, means they're wrong.

Cue Nicole Gelinas in City Journal, "The customer isn't always right. But an airline that assaults a customer is always wrong." The public largely agrees, so one hopes that what comes out of this is a re-accommodation of values. There's more where this came from.

Illustrations: United jet at Chicago O'Hare; MIT Moral Machine; Danielle Citron.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 7, 2017

Have robot, will legislate

The mainstream theme about robots for the last year or so has been the loss of human jobs. In 2013, Oxford Martin professors Carl Benedikt Frey and Michael Osborne estimated that automation would eliminate 47% of US jobs in the next 20 years. This year, McKinsey estimated that 49% of work time could be automated - but more slowly. In 2016, the OECD thought only 9% of jobs were at risk across 21 of its member countries. These studies' primary result has been thousands of scary words: the five jobs that will go first (a genre that concludes no job is safe). Eventually, Bill Gates suggested robots should pay taxes if they've taken over human jobs.

The jobs issue has been percolating since 2007, and it and other social issues have recently spurred a series of proposals for regulating robots and AI. The group of scientists who assembled in January at Asilomar released a list of 23 principles for governing AI development. Also in January, a group at London's Alan Turing Institute and the University of Oxford called for an AI regulator to ensure algorithmic fairness. The European Parliament has weighed giving robots legal status. Finally, the new research group AI Now has released a report (PDF) with recommendations aimed at increasing fairness and diversity.

More than in past years (2016's side event and main event, 2015, 2013, and 2012), the discussions at this year's We Robot legal workshop reflected this wider social anxiety rather than creating their own.

For me, three themes emerged. First, that the fully automated systems of the future will be less dangerous than the intermediate systems that are already beginning to arrive. Second, that changing the lens through which we look at these things is valuable even if we dislike or disagree with the altered lens that's chosen. Finally, that efforts to regulate AI may be helpful but may be implemented in entirely wrong-headed ways. This last is the threat We Robot was founded to avoid.

At the pre-We Robot event on the social implications of robotics at Brown University, Gary Marcus argued that we will eventually have to build systems that learn ethics, though it's not clear how we can. With this in mind, he said, "The greatest threat may be immature pre-superintelligence systems that can do lots of perception, optimization, and control, but very little abstract reasoning." In columns such as this one for The New Yorker, Marcus has argued that the progress made so far is too limited for such applications, a claim with which a recent blog posting by data scientist Sarah R. Constantin concurs.

At We Robot itself, the paper by Marc C. Canellas et al. examined how to frame regulation of hybrid human-automation systems. This is of particular relevance to "self-driving" cars (autocars!), where partially automated systems date back to the 1970s introduction of cruise control. Last year, Dexter Palmer's fascinating novel Version Control explored a liability regime that, like last year's paper on "moral crumple zones" by Madeleine Elish, ultimately punished the human partner. As Canellas explained, most US states assign liability to the driver. Autocars present variable scenarios: drivers have no way to test for software defects, but may still deserve blame for ordering the car to drive through a blizzard.

The extent to which cars should be fully automated is a subject for debate already. There is some logic to saying that fully automated cars should be safer than partially automated ones: humans are notoriously bad at staying attentive when asked to monitor systems that leave them nothing to do for long stretches of time. Asking us to do that sort of monitoring is the wrong way round, since that's what computers, infinitely patient, are actually good at. In addition, today's drivers have thousands of hours of driving experience to deploy when they do take over. This won't be true 15 years from now with a generation of younger drivers raised on a steady diet of "let me park that for you". The combination of better cars but worse humans may be far more dangerous.

Changing the lens was the subject of Kristen Thomasen's paper, Feminist Perspectives on Drone Regulation (PDF). Thomasen says she was inspired by a Slate article by Margot Kaminski that called out the rash of sunbathing-teenager stories being used to argue for privacy regulation. Thomasen's point was that the sunbathing narrative imposes a limited and paternalistic definition of privacy as physical safety, and ignores issues like information imbalance. Diversity among designers is essential: even the simple Roomba can cause damage when designers fail to imagine people sleeping on the floor.

Both these issues inform the third: how to regulate AI. The main conclusion: many different approaches will be needed. Many questions arise: how much and what do we delegate to private actors? (See also fake news, trolling, copyright infringement, and the web.) Whose ethics do we embed? How do governments regulate companies whose scale - "Google scale", "Facebook scale", not "UK scale" - is larger than their own? And my own question: how do you embed ethics when the threat is the maker, not the machine, as will be the case for the foreseeable future?

Illustrations: Charlie Chaplin in Modern Times (1936); Pepper, at We Robot 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.