
Algorithms make a lot of your decisions – and it might be right for you


By Kyle Mittan, University Communications



There is a good chance that at least a few algorithms have helped you find this article.

After all, algorithms – which are essentially systems or processes that help make a choice – have been around almost forever. But they have become ubiquitous with the rise of big data, and now typically take the form of mathematical formulas expressed in computer code.

Facebook uses an algorithm to deliver its news feed to nearly 3 billion users. Algorithms are what make Tesla cars drive themselves. And any Google search involves an algorithm that decides the order of the results.


Policymakers have long assumed that most people would rather not have a machine make certain day-to-day decisions, such as whether someone deserves a bank loan or is liable for a traffic violation. But a new study by Derek Bambauer, a professor at the University of Arizona James E. Rogers College of Law, finds that many people are perfectly happy to let a machine make certain decisions for them.

Bambauer, who studies internet censorship, cybersecurity, and intellectual property, worked in the IT field as a systems engineer before his legal career.

His new study, which is expected to be published in the Arizona State Law Journal in early 2022, aims to help legal scholars and policymakers understand the public’s perception of decision-making algorithms so they can regulate those algorithms in a way that better reflects consumers’ views.

“We’re at a time when algorithms have power and potential, but there’s also a fair amount of fear about them,” said Bambauer, who co-authored the study with Michael Risch, professor and vice dean of the Charles Widger School of Law at Villanova University.

This fear, he added, is probably exaggerated by legal scholars and policymakers.

“In general, I think Michael and I believe technology tends to be more mundane – it doesn’t do the great things we thought it would do, and it doesn’t do the horrible things we thought it would,” Bambauer said. “And so we thought people would go ahead and say, ‘We have to reform this,’ before asking, ‘How do people really feel?’”

The preference for algorithms was “really surprising”

To better understand what people think about the technology, Bambauer and Risch used an online survey to ask about 4,000 people whether they would prefer a human or an algorithm to make one of four hypothetical decisions:

  • Whether the participant would receive a $10 to $20 gift card from a coffee shop
  • Whether the participant would be found liable for a civil traffic violation
  • Whether the participant would be approved for a bank loan
  • Whether the participant would be included in a clinical trial for a treatment for a disease they have

Study participants were randomly assigned to one of four scenarios and to a decision maker – human or algorithm. Participants also received information about the decision maker, such as its accuracy rate, the time it takes to make a decision, and the cost of using it. With this information, participants could then choose whether they wanted to switch to the other decision maker.

The study found that 52.2% of all participants chose the algorithm, while 47.8% chose a human.

Even though they suspected that negative public perceptions of algorithms had been oversold, the researchers were taken aback by their findings.

“We thought that if people were really nervous about algorithms, it would show up in this aggregate – that the percentage of people who chose algorithms would not only be less than 50%, but it would be statistically significantly lower,” said Bambauer. “But that 4% difference – although it doesn’t sound like much – is statistically significant, and it was really surprising.”
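That claim of statistical significance is easy to sanity-check. The sketch below (Python, a back-of-the-envelope normal approximation; the roughly 4,000 respondents and the 52.2% split are taken from the article, while the exact counts are illustrative assumptions rather than the paper’s raw data) shows why even a small gap clears the usual significance bar at this sample size:

```python
# Rough check: is 52.2% vs. 47.8% significant with ~4,000 respondents?
# Sample size and split are from the article; exact counts are assumed.
import math

n = 4000        # approximate number of survey respondents
p_hat = 0.522   # share who chose the algorithm
p_null = 0.5    # null hypothesis: no preference either way

# Standard error of a proportion under the null, then a z-score.
se = math.sqrt(p_null * (1 - p_null) / n)
z = (p_hat - p_null) / se

# Two-sided p-value from the standard normal distribution.
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"z = {z:.2f}, two-sided p-value = {p_value:.4f}")
# Prints roughly: z = 2.78, two-sided p-value = 0.0054
```

With that many respondents, the standard error is under a percentage point, so a 2.2-point deviation from an even split is very unlikely to be chance – which is why the seemingly modest difference still surprised the researchers.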

The researchers also found that:

  • Cheaper algorithms are more popular. In scenarios where the algorithm cost less than the human, 61% of respondents chose the algorithm, but only 43% did so when the cost was the same.
  • When the stakes are high, people turn to humans. The scenarios presented to study participants had consequences ranging from receiving a gift card for a coffee to paying several hundred dollars for a traffic ticket. The higher the stakes, the more often participants turned to humans.
  • Accuracy plays an important role in the choice of decision-maker. If one decision-maker had a better accuracy rate than the other, 74% of respondents chose the more accurate option. But participants were roughly evenly divided when the human and the algorithm had nearly equal accuracy rates.
  • Faster algorithms are more attractive to consumers. If the algorithm was faster, participants chose it 57% of the time. But if a human was just as fast, the human was chosen 48% of the time.

The “what” and the “how” for policymakers

Bambauer said he hopes the study will lead policymakers to ask two key questions with difficult answers: “What should they do?” and “How should they do it?”

The “What should they do?” question is difficult, in part because algorithms are used across a wide range of industries, which means one size cannot fit all, Bambauer said. Algorithms also lack the level of transparency that regulators and consumers have come to expect, he added, because the most tangible form of an algorithm is computer code, which looks like gibberish to the average person.

“If Facebook released its algorithm tomorrow, no one would know what it is,” Bambauer said. “For most of us, it wouldn’t make a bit of difference.”

In answering the “How should they do it?” question, policymakers should avoid trying to simply regulate algorithms out of existence, Bambauer said. Besides not being in the public interest, outright banning social media companies from using algorithms is “literally impossible,” he said.

“Just showing things in chronological order is an algorithm,” he added. “There’s just no getting around it.”
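His point is easy to see in code. Here is a minimal sketch (in Python, with invented posts and field names, purely for illustration) of a bare “newest first” feed – which is itself a sorting algorithm:

```python
# Even a plain reverse-chronological feed is an algorithm: a sorting
# rule applied to posts. The data and field names here are made up.
from datetime import datetime

posts = [
    {"author": "ana",  "posted_at": datetime(2021, 11, 3, 9, 15)},
    {"author": "ben",  "posted_at": datetime(2021, 11, 3, 14, 2)},
    {"author": "carl", "posted_at": datetime(2021, 11, 2, 18, 40)},
]

# "Newest first": sort by timestamp in descending order.
feed = sorted(posts, key=lambda post: post["posted_at"], reverse=True)

for post in feed:
    print(post["posted_at"], post["author"])
```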

Lawmakers would do well to look to the late 1980s, Bambauer said, when Congress passed a law requiring credit card companies to provide a cheat sheet summarizing the costs of their cards. The charts containing this information, called Schumer boxes, were named after then-Rep. Charles Schumer of New York, who sponsored the legislation.

This could serve as a model, Bambauer said, for educating consumers about the algorithms used to make decisions about them. Owners of algorithms, he said, might be required to provide plain-language facts about what their algorithms do, such as: “By using an algorithm, we save you money” or “By using an algorithm, we make fewer errors.”

Bambauer and Risch offer a more in-depth analysis of their policy recommendations in a recent essay published on TechStream, a Brookings Institution website that covers technology policy.

While policy solutions to address algorithm flaws are not yet clear, Bambauer said the Schumer box shows lawmakers already have the tools to craft such solutions. He sees a future in which decision-making systems likely involve both humans and algorithms.

“The right thing to do,” he said, “is find a place where we should have the person and find where we should have the code.”
