Categories
Politics + Economics

Good economics is …

… according to Esther Duflo and Abhijit Banerjee, the freshly minted Nobel laureates in economics. They write in the Guardian:

But good economics is much less strident, and quite different. It is less like the hard sciences and more like engineering or plumbing: it breaks big problems into manageable chunks and tries to solve them with a pragmatic approach – a combination of intuition and theory, trial and acknowledged errors. Good economics starts with some facts that are troubling, makes some guesses based on what we already know about human behaviour and theories that have been shown to work, uses data to test those guesses, refines (or radically alters) its line of attack based on the new set of facts and, eventually, with some luck, gets to a solution.

If we’re serious about changing the world, we need a better kind of economics to do it (Guardian)

Categories
Data + Code

Working with data and being skeptical of it is not a contradiction – on the contrary

I’m a data scientist who is skeptical about data, writes Andrea Jones-Rooy at Quartz. There is a lot worth quoting:

Whether it’s curing cancer, solving workplace inequality, or winning elections, data is now perceived as being the Rosetta stone for cracking the code of pretty much all of human existence.

But in the frenzy, we’ve conflated data with truth. And this has dangerous implications for our ability to understand, explain, and improve the things we care about.

(…)

“What does the data say?”

Data doesn’t say anything. Humans say things. They say what they notice or look for in data—data that only exists in the first place because humans chose to collect it, and they collected it using human-made tools.

Data can’t say anything about an issue any more than a hammer can build a house or almond meal can make a macaron. Data is a necessary ingredient in discovery, but you need a human to select it, shape it, and then turn it into an insight.

(…)

Data is an imperfect approximation of some aspect of the world at a certain time and place.

(…)
Companies—and my students—are so obsessed with being on the cutting edge of methodologies that they’re skipping the deeper question: Why are we measuring this in this way in the first place? Is there another way we could more thoroughly understand people? And, given the data we have, how can we adjust our filters to reduce some of this bias?

(…)

This doesn’t mean throw out data. It means that when we include evidence in our analysis, we should think about the biases that have affected their reliability. We should not just ask “what does it say?” but ask, “who collected it, how did they do it, and how did those decisions affect the results?”

We need to question data rather than assuming that just because we’ve assigned a number to something that it’s suddenly the cold, hard Truth. When you encounter a study or dataset, I urge you to ask: What might be missing from this picture? What’s another way to consider what happened? And what does this particular measure rule in, rule out, or incentivize?

Categories
Data + Code Media + Internet

Links concerning the Internet

  • Building a data culture: „self-service learning program to facilitate fun, creative introductions for the non-technical folks in your organization“
  • related to that: Data Playbook: „The Data Playbook (Beta) builds on social learning and modularized activities already developed to promote data literacy via workshops“
  • A manifesto for the Internet Age:

  • The same problem, over and over: anyone who wants to use self-learning algorithms first needs good-quality data. For facial recognition that means many, many, many photos annotated with skin colour, age, gender and a host of other attributes. And where do companies and researchers get these images? For example by scraping, or from bulk files of the former photo platform Flickr. That raises two problems: first, it is an intrusion into people's privacy; second, the images can be used to train surveillance software. More: Facial recognition's 'dirty little secret': Millions of online photos scraped without consent
Categories
Data + Code Media + Internet

Lots of data, nothing behind it

So, machine learning is everywhere these days. But is it necessary everywhere? Or are the results often, well, predictable – and achievable at least as well with a bit of thinking and precisely specified algorithms?

This is, by the way, the dirty secret of the machine learning movement: almost everything produced by ML could have been produced, more cheaply, using a very dumb heuristic you coded up by hand, because mostly the ML is trained by feeding it examples of what humans did while following a very dumb heuristic. There’s no magic here. If you use ML to teach a computer how to sort through resumes, it will recommend you interview people with male, white-sounding names, because it turns out that’s what your HR department already does. If you ask it what video a person like you wants to see next, it will recommend some political propaganda crap, because 50% of the time 90% of the people do watch that next, because they can’t help themselves, and that’s a pretty good success rate.

The quote comes from a blog post of the "nice mix of rant and analysis" variety, and its thesis is: recommendation algorithms don't actually need all of this data collection.
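
A toy sketch of that claim (all names and numbers below are made up, this is not code from the post): when the "training data" is just a log of what a dumb heuristic did, a model fitted to that log largely reproduces the heuristic.

    # Toy sketch: a "model" trained on logs of a dumb heuristic learns the heuristic back.
    import random
    from collections import Counter

    random.seed(0)

    videos = ["propaganda_clip", "cat_video", "documentary", "tutorial"]
    weights = [0.9, 0.06, 0.03, 0.01]   # hypothetical: most people click the same thing

    def dumb_heuristic():
        # The rule the old system followed: 90% of the time, pick the most-watched video.
        return random.choices(videos, weights=weights)[0]

    # "Training data" = a log of what the heuristic did.
    training_log = [dumb_heuristic() for _ in range(10_000)]

    # Minimal stand-in for a trained recommender: predict the majority label it saw.
    learned_recommendation = Counter(training_log).most_common(1)[0][0]

    # How often the "learned" recommendation matches what the heuristic would do anyway.
    agreement = sum(choice == learned_recommendation for choice in training_log) / len(training_log)
    print(learned_recommendation, round(agreement, 2))   # 'propaganda_clip', roughly 0.9

A real classifier fed user features would behave the same way wherever the logged behaviour it learns from is itself near-uniform.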

More from Forget privacy: you’re terrible at targeting anyway:

Probably what it does is infer my gender, age, income level, and marital status. After that, it sells me cars and gadgets if I’m a guy, and fashion if I’m a woman. Not because all guys like cars and gadgets, but because some very uncreative human got into the loop and said „please sell my car mostly to men“ and „please sell my fashion items mostly to women.“ Maybe the AI infers the wrong demographic information (I know Google has mine wrong) but it doesn’t really matter, because it’s usually mostly right, which is better than 0% right, and advertisers get some mostly demographically targeted ads, which is better than 0% targeted ads.


You know this is how it works, right? It has to be. You can infer it from how bad the ads are. Anyone can, in a few seconds, think of some stuff they really want to buy which The Algorithm has failed to offer them, all while Outbrain makes zillions of dollars sending links about car insurance to non-car-owning Manhattanites. It might as well be a 1990s late-night TV infomercial, where all they knew for sure about my demographic profile is that I was still awake.

Categories
Data + Code

AI, ML, etc.: it will take a while yet, and that is perfectly normal

AI here, AI there. But who is actually putting it to use? MIT Technology Review does the reality check and confirms much of what practitioners who work with it day to day have been saying.

It’s one thing to see breakthroughs in artificial intelligence that can outplay grandmasters of Go, or even to have devices that turn on music at your command. It’s another thing to use AI to make more than incremental changes in businesses that aren’t inherently digital.

Because Google, Amazon, Facebook, Netflix and the other big companies have a huge number of employees who work on nothing else, and their business models are inherently built around data. In other industries that is not the case.

Data scientists at IBM and Fluor didn’t need long to mock up algorithms the system would use, says Leslie Lindgren, Fluor’s vice president of information management. What took much more time was refining the technology with the close participation of Fluor employees who would use the system. In order for them to trust its judgments, they needed to have input into how it would work, and they had to carefully validate its results, Lindgren says.


To develop a system like this, “you have to bring your domain experts from the business—I mean your best people,” she says. “That means you have to pull them off other things.” Using top people was essential, she adds, because building the AI engine was “too important, too long, and too expensive” for them to do otherwise.

The conclusion: it will take a while before artificial intelligence and machine learning arrive at scale in non-tech industries as well. That is not unusual:

What (…) economists confirmed, is that the spread of technologies is shaped less by the intrinsic qualities of the innovations than by the economic situations of the users. The users’ key question is not, as it is for technologists, “What can the technology do?” but “How much will we benefit from investing in it?”

Links:

Categories
Data + Code Culture + Society Media + Internet

The illusion of the Cloud

  • „[The] “cloud” is a massive interconnected physical infrastructure which exists across the world.“
  • By using cloud services from Amazon, Google or Microsoft, one can outsource one’s own infrastructure setup with all its challenges
  • now: Infrastructure-as-a-Service
  • super-cheap hosting with a price that depends on usage and is scalable
  • „The actual infrastructure at the heart of AWS’ infrastructure-as-a-service isn’t the thing that makes it important to developers; it’s the services and APIs built on top of that infrastructure.“ (Ingrid Burrington)
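
To make Burrington's point concrete: what a developer actually touches is a service API, not the hardware behind it. A minimal sketch using the boto3 S3 client; the bucket and object names are hypothetical, and configured AWS credentials are assumed.

    # Minimal sketch: "the cloud" as seen by a developer is an API call, not racks and routers.
    # Assumes AWS credentials are configured; bucket and key names are made up.
    import boto3

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="example-bucket",           # hypothetical bucket name
        Key="reports/2019/usage.csv",      # hypothetical object key
        Body=b"date,requests\n2019-01-01,1024\n",
    )
    # Billing then follows usage (storage and requests), which is the
    # pay-as-you-go model mentioned in the list above.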

Links:

Categories
Data + Code Media + Internet Politics + Economics

Pros and Cons of a Social Index

Heather Krause writes one of my favorite newsletters. She works at Datassist, a company working with NGOs and data journalists.

Recently, she wrote about social indices:

A social index is a summary of a complex issue (or issues). Generally, social indexes take a large number of variables related to a specific topic or situation and combine them to get one number. It’s often a single number, but can also be a rank (#1 country out of 180) or a category (“high performing”).

Heather Krause

Pros of social indices:

  • attract public interest
  • allow comparisons over time
  • provide a big picture
  • „reduce vast amounts of information to a manageable size“

Cons:

  • „disguise a massive amount of inequality in the data“
  • simplistic interpretations
  • hide emerging problems of some variables

So, should we use them?

Krause says "yes", but …

If we’re using an index to understand a trend or situation, we also need to look at the individual elements that make up that index.
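
Mechanically, such an index is little more than a handful of indicators scaled to a common range and combined into one number. A toy sketch with made-up indicators and values (not any real index) also shows why Krause's caveat matters: two very different profiles can end up with the same score.

    # Toy composite index with made-up numbers: scale each indicator to 0..1, then average.

    def min_max(value, lo, hi):
        """Scale a raw indicator to the 0..1 range."""
        return (value - lo) / (hi - lo)

    # Hypothetical indicators and plausible ranges.
    ranges = {
        "life_expectancy": (50, 90),     # years
        "school_years":    (0, 15),      # mean years of schooling
        "income":          (0, 80_000),  # GDP per capita, USD
    }

    country_a = {"life_expectancy": 82, "school_years": 4.5, "income": 32_000}
    country_b = {"life_expectancy": 66, "school_years": 9.0, "income": 40_000}

    def index(country):
        scores = [min_max(country[k], *ranges[k]) for k in ranges]
        return sum(scores) / len(scores)   # one tidy number

    print(round(index(country_a), 2), round(index(country_b), 2))   # 0.5 and 0.5:
    # identical score, very different underlying components

Looking at the individual elements, as Krause suggests, is what reveals the difference the single number hides.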

Datassist published a list with various indicators here.

Categories
Politics + Economics

An economist’s view on AI

Michaela Schmöller writes in "Secular stagnation: A false alarm in the euro area?" for the Bank of Finland:

It is important to note that productivity growth evolves in a two-stage process: the initial invention of new technologies through research and development, subsequently followed by technological diffusion, i.e. the incorporation of these new technologies in the production processes of firms. As a result, even though many important technology advances may have been invented in recent times, they will only exert an effect on output and productivity once firms utilise these technologies in production. Potential productivity gains from technologies that have yet to be widely adopted may be sizable. A central example is the field of artificial intelligence in which future productivity gains may be considerable once AI-related technologies diffuse to the wider economy. (…)

AI may represent — as did the steam engine, the internal combustion engine and personal computers — a general purpose technology, meaning that it is far-reaching, holds the potential for further future improvements and has the capability of spurring other major, complementary innovations over time with the power of drastically boosting productivity. Incorporating AI in production requires substantial changes on the firm-level, including capital stock adjustments, the revision of internal processes and infrastructures, as well as adapting supply and value chains to enable the absorption of these new technologies. Consequently, this initial adjustment related to the incorporation of general purpose technologies in firms' production may take time and may initially even be accompanied by a drop in labour productivity before delivering positive productivity gains.


Categories
Data + Code

Sorry, data analyses aren't the holy grail of objectivity either

Data analyses are not neutral: every decision about variables or methodology is, in the end, also a substantive decision. A study covered by Spektrum magazine illustrates this vividly:

Do Black football players receive red cards more often than non-Black players? That was the question to which researchers gave 29 different answers. The results differed markedly in places and even contradicted one another – although everyone had exactly the same data set to work with.

The differences arise, for example, from points such as these (the sketch after the list shows how such choices can change the answer):

  • What are the assumptions about the distribution of the data?
  • Can referees and players influence each other?
  • Are red cards independent of one another?
  • Are all variables included in the analysis? "A good two thirds of the teams, for example, took the player's position on the pitch into account, but only three percent considered the total number of dismissals a referee had issued."
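
A toy sketch of that last point (simulated data, not the study's actual data or models): simply changing which covariates are included can move the estimated "effect" of skin tone from clearly positive to roughly zero.

    # Simulated data, not the real study: two defensible specifications, two answers.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5_000

    skin_tone  = rng.integers(0, 2, n)                   # simplified 0/1 rating
    defender   = rng.integers(0, 2, n)                   # position on the pitch
    strictness = rng.normal(0, 1, n) + 0.6 * skin_tone   # referee trait, correlated with skin tone

    # In this simulation, red cards depend on position and strictness, not on skin tone itself.
    logits = -3 + 0.8 * defender + 0.6 * strictness
    red_card = rng.binomial(1, 1 / (1 + np.exp(-logits)))

    def skin_tone_coefficient(covariates):
        """Logistic regression; returns the estimated coefficient on skin tone."""
        X = sm.add_constant(np.column_stack([skin_tone] + covariates))
        return sm.Logit(red_card, X).fit(disp=0).params[1]

    print(skin_tone_coefficient([]))                       # clearly positive: "looks like an effect"
    print(skin_tone_coefficient([defender, strictness]))   # near zero once controls are included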

And what follows from this? Can analyses no longer be trusted? Of course they can, but as so often it helps to be aware that data analyses, too, do not produce results carved in stone. As in journalism, the same rule applies here: transparency increases credibility.

The best defense against subjectivity in science is to expose it. Transparency in data, methods, and process gives the rest of the community opportunity to see the decisions, question them, offer alternatives, and test these alternatives in further research.

Study: "Many Analysts, One Data Set"

So does dark skin colour influence dismissals? Two thirds of the analyses say "yes", one third say "no".

via WZB Data Science Blog