Links concerning the Internet

  • Building a data culture: „self-service learning program to facilitate fun, creative introductions for the non-technical folks in your organization“
  • related to this: Data Playbook: „The Data Playbook (Beta) builds on social learning and modularized activities already developed to promote data literacy via workshops“
  • A manifesto for the Internet Age:

  • The same problem over and over: anyone who wants self-learning algorithms first needs good-quality data. For facial recognition that means many, many, many photos annotated with skin colour, age, gender and a host of other attributes. And where do companies and researchers get these images? For example by scraping, or from bulk files of the former photo platform Flickr. That raises two problems: first, it is an invasion of privacy; second, the images can be used to train surveillance software. More: Facial recognition’s ‚dirty little secret‘: Millions of online photos scraped without consent

Searching for the future and not finding it

Predictions based on calculations are a game played with time.

The police are on a kind of journey through time. They want to identify suspects as early as possible, and not just in air travel. „Vor die Lage kommen“ (getting ahead of the situation) is what former BKA chief Jörg Ziercke called it.

Morgen ein Mörder (A murderer tomorrow)

And then there is the problem of false positives (SZ). With predictive algorithms, what matters is not only which people are correctly identified, but also which people are wrongly placed under suspicion. In binary decisions (yes or no, suspect or not a suspect), this error is known mathematically as the false-positive rate.

For the facial recognition trial at Berlin’s Südkreuz station, the police report a false match rate of 0.1 percent. That sounds small, but a number below one is misleading here. In reality it is an incredibly high value, too high a value. Vanessa Wormer and Christian Endt do the arithmetic in the SZ:

  • Around 12 million rail passengers per day
  • a false-positive rate of 0.1 percent yields 12,000 innocent people under suspicion

That is the problem with systems of this kind, built on indiscriminate mass surveillance: even with very low error rates, a disproportionately large number of people wrongly end up in the investigators’ sights.
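
As a quick plausibility check, the arithmetic can be sketched in a few lines of Python. The passenger count and the 0.1 percent rate are the figures quoted above; the number of genuinely wanted people and the detection rate are invented, purely illustrative assumptions:

    # Back-of-the-envelope check of the Südkreuz figures quoted above.
    # 'true_suspects' and 'hit_rate' are invented, illustrative assumptions;
    # only the passenger count and the false-positive rate come from the article.
    passengers_per_day = 12_000_000   # roughly 12 million rail passengers per day
    false_positive_rate = 0.001       # 0.1 percent, the rate reported by the police
    true_suspects = 100               # hypothetical number of genuinely wanted people
    hit_rate = 0.8                    # hypothetical detection rate for real suspects

    false_alarms = (passengers_per_day - true_suspects) * false_positive_rate
    real_hits = true_suspects * hit_rate

    print(f"False alarms per day: {false_alarms:,.0f}")   # about 12,000
    print(f"Real hits per day:    {real_hits:,.0f}")      # about 80
    print(f"Share of alerts pointing at an actual suspect: "
          f"{real_hits / (real_hits + false_alarms):.2%}")

Even with these generous assumptions, well under one percent of all alerts would point at an actual suspect; the rest are the 12,000 innocent people from the calculation above.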

Lots of data, nothing behind it

So, this machine learning thing is everywhere. But is it necessary everywhere? Or are the results often, well, predictable? And achievable at least as well with a bit of thought and precisely specified algorithms?

This is, by the way, the dirty secret of the machine learning movement: almost everything produced by ML could have been produced, more cheaply, using a very dumb heuristic you coded up by hand, because mostly the ML is trained by feeding it examples of what humans did while following a very dumb heuristic. There’s no magic here. If you use ML to teach a computer how to sort through resumes, it will recommend you interview people with male, white-sounding names, because it turns out that’s what your HR department already does. If you ask it what video a person like you wants to see next, it will recommend some political propaganda crap, because 50% of the time 90% of the people do watch that next, because they can’t help themselves, and that’s a pretty good success rate.

The quote comes from a blog post of the „nice mixture of rant and analysis“ variety, and its thesis is: recommendation algorithms really do not need all this large-scale data collection.

More from Forget privacy: you’re terrible at targeting anyway:

Probably what it does is infer my gender, age, income level, and marital status. After that, it sells me cars and gadgets if I’m a guy, and fashion if I’m a woman. Not because all guys like cars and gadgets, but because some very uncreative human got into the loop and said „please sell my car mostly to men“ and „please sell my fashion items mostly to women.“ Maybe the AI infers the wrong demographic information (I know Google has mine wrong) but it doesn’t really matter, because it’s usually mostly right, which is better than 0% right, and advertisers get some mostly demographically targeted ads, which is better than 0% targeted ads.


You know this is how it works, right? It has to be. You can infer it from how bad the ads are. Anyone can, in a few seconds, think of some stuff they really want to buy which The Algorithm has failed to offer them, all while Outbrain makes zillions of dollars sending links about car insurance to non-car-owning Manhattanites. It might as well be a 1990s late-night TV infomercial, where all they knew for sure about my demographic profile is that I was still awake.
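
To make the „very dumb heuristic“ concrete, here is a minimal sketch of the kind of hand-coded rules the post is talking about: a most-watched-next recommender and the „uncreative human in the loop“ ad rule. The data and the rules are invented for illustration; no particular platform works exactly like this:

    from collections import Counter, defaultdict

    # A hand-coded "dumb heuristic" recommender: suggest whatever most
    # people watched next. The viewing data below is invented.
    watch_sequences = [
        ["cat_video", "propaganda_clip"],
        ["cat_video", "propaganda_clip"],
        ["cat_video", "cooking_show"],
    ]

    next_after = defaultdict(Counter)
    for session in watch_sequences:
        for current, following in zip(session, session[1:]):
            next_after[current][following] += 1

    def recommend(video):
        """Recommend the single most common follow-up video."""
        follow_ups = next_after.get(video)
        return follow_ups.most_common(1)[0][0] if follow_ups else "home_feed"

    def pick_ad(inferred_gender):
        """The hand-written targeting rule from the quote above."""
        return "car_ad" if inferred_gender == "male" else "fashion_ad"

    print(recommend("cat_video"))   # -> propaganda_clip, because most people watched that next
    print(pick_ad("male"))          # -> car_ad

No model training and no data hoarding beyond a simple view counter, yet it reproduces the behaviour the quote complains about.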

AI, ML, etc.: it will take a while yet, and that is perfectly normal

AI here, AI there. But who is actually using it? MIT Technology Review does a reality check and confirms a lot of what practitioners who work with it day to day have been saying.

It’s one thing to see breakthroughs in artificial intelligence that can outplay grandmasters of Go, or even to have devices that turn on music at your command. It’s another thing to use AI to make more than incremental changes in businesses that aren’t inherently digital.

Because Google, Amazon, Facebook, Netflix and the other big companies have huge numbers of employees who work on nothing else, and their business models are inherently built around data. In other industries that is not the case.

Data scientists at IBM and Fluor didn’t need long to mock up algorithms the system would use, says Leslie Lindgren, Fluor’s vice president of information management. What took much more time was refining the technology with the close participation of Fluor employees who would use the system. In order for them to trust its judgments, they needed to have input into how it would work, and they had to carefully validate its results, Lindgren says.


To develop a system like this, “you have to bring your domain experts from the business—I mean your best people,” she says. “That means you have to pull them off other things.” Using top people was essential, she adds, because building the AI engine was “too important, too long, and too expensive” for them to do otherwise.

The upshot, then, is that it will take a while longer before artificial intelligence and machine learning arrive at scale in non-tech industries as well. That is not unusual:

What (…) economists confirmed, is that the spread of technologies is shaped less by the intrinsic qualities of the innovations than by the economic situations of the users. The users’ key question is not, as it is for technologists, “What can the technology do?” but “How much will we benefit from investing in it?”

Links:

An economist’s view on AI

Michaela Schmöller writes in „Secular stagnation: A false alarm in the euro area?“ for the Bank of Finland:

It is important to note that productivity growth evolves in a two-stage process: the initial invention of new technologies through research and development, subsequently followed by technological diffusion, i.e. the incorporation of these new technologies in the production processes of firms. As a result, even though many important technology advances may have been invented in recent times, they will only exert an effect on output and productivity once firms utilise these technologies in production. Potential productivity gains from technologies that have yet to be widely adopted may be sizable. A central example is the field of artificial intelligence in which future productivity gains may be considerable once AI-related technologies diffuse to the wider economy. (…)

AI may represent — as did the steam engine, the internal combustion engine and personal computers — a general purpose technology, meaning that it is far-reaching, holds the potential for further future improvements and has the capability of spurring other major, complementary innovations over time with the power of drastically boosting productivity. Incorporating AI in production requires substantial changes on the firm-level, including capital stock adjustments, the revision of internal processes and infrastructures, as well as adapting supply and value chains to enable the absorption of these new technologies. Consequently, this initial adjustment related to the incorporation of general purpose technologies in firms’ production may take time and may initially even be accompanied by a drop in labour productivity before delivering positive productivity gains.


A good description of Artificial Intelligence in the Economist

„One way of understanding this [Artificial Intelligence] is that for humans to do things they find difficult, such as solving differential equations, they have to write a set of formal rules. Turning those rules into a program is then pretty simple. For stuff human beings find easy, though, there is no similar need for explicit rules—and trying to create them can be hard. To take one famous example, adults can distinguish pornography from non-pornography. But describing how they do so is almost impossible, as Potter Stewart, an American Supreme Court judge, discovered in 1964. Frustrated by the difficulty of coming up with a legally watertight definition, he threw up his hands and wrote that, although he could not define porn in the abstract, “I know it when I see it.”

Machine learning is a way of getting computers to know things when they see them by producing for themselves the rules their programmers cannot specify. The machines do this with heavy-duty statistical analysis of lots and lots of data.“
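
The contrast between the two approaches can be shown in a toy example. The hand-written rule and the tiny labelled data set below are invented, and scikit-learn is just one common off-the-shelf tool; the Economist piece does not prescribe any particular library:

    from sklearn.tree import DecisionTreeClassifier

    # Approach 1: an explicit rule, written down by a programmer.
    def looks_like_spam(num_links, num_exclamation_marks):
        return num_links > 3 and num_exclamation_marks > 1

    # Approach 2: the machine derives its own rule from labelled examples,
    # i.e. statistics over data instead of hand-written logic.
    X = [[0, 0], [1, 0], [5, 3], [7, 4], [2, 1], [6, 2]]   # [links, exclamation marks]
    y = [0, 0, 1, 1, 0, 1]                                  # 0 = not spam, 1 = spam

    model = DecisionTreeClassifier().fit(X, y)
    print(looks_like_spam(8, 5))        # True, because the rule says so
    print(model.predict([[8, 5]])[0])   # 1, because similar examples were labelled spam

As the earlier quotes point out, such a learned rule is only as good as the labelled examples it is fed.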

The end of the piece:

„But for now, the best advice is to ignore the threat of computers taking over the world—and check that they are not going to take over your job first.“

from: Rise of the machines