
The Rise of the Robots – Expected Answers

  1. Monologue part

    First task

    the “hype”

    • very negative attitude towards AI: a dystopian future vision in which human labour is no longer required (l. 3)
    • AI takes over your job/your life and you lack the power to fight this (ll. 1-3)
    • according to this view, AI will make our lives worse; it will control our lives instead of improving them (with the exception of a small elite that will live in incredible luxury) (ll. 14-17)


    the author

    • is a “tech evangelist” (l. 4), which means that she is of the opinion that technology and engineering in general can improve our lives (ll. 4-6)
    • she is aware that the current changes in the fields of data processing and artificial intelligence can actually be regarded as an irrevocable technological revolution, and that no one really knows where this might be going and how exactly these data are going to be used (ll. 7-13)
    • she is also aware that these developments could make our lives worse (humans as slaves of the technologies that they have purchased) (ll. 14-17)
    • yet she does not share dystopian visions of an automated world in which humans are made obsolete (see above), because she is convinced that we can make sure we benefit from these developments by overcoming our fears, by actively engaging with and defining our relationship with this new technology, and especially by becoming clear about who is in charge of this process (ll. 18-23)
    • therefore, according to the author, the technological revolution we are currently facing is nothing to be afraid of but something we should engage with to make sure it benefits us


    Second task
    individual student answers; possible aspects:

    • the author is right not to join in the dystopian hype of a world ruled by robots
    • she is also right when she says that the current technological revolution is something we can benefit from in our future lives [examples from class]
    • on the other hand, she is also right to admit that it can make our lives worse; however, this is not just about dystopian fantasies of humans enslaved by technologies; there are downsides to these new technologies that are much more real [examples from class]
    • among these, the uncontrollability of what happens to our data is one of the most important issues
    • therefore, I also think that we must define our relationship with this new technology; this includes clear rules and boundaries as to what happens with our data [examples from class]
    • the question is whether we will actually be able to do this in time or whether technology is not already ahead of politics; if we do not catch up, we might end up being ruled not by robots but by “giant web crawlers” controlling our data and thus, to some extent, our lives
  2. Dialogue part

    Extending the scope of the task

    The first prompt is intended to encourage the students to comment on the author's central concern, namely practising responsible use of the new technologies, unless this has already been done in the second task of the monologue part.

    Similarly, the second prompt is meant to explore in greater depth a question that is tied to the problem of responsible use: who decides what responsible use should look like?

    1. Explain how we can “face up to the responsibilities of this new era”.
      • we need to be aware of the benefits but also the potential risks of artificial intelligence [examples from class, e.g. bias in facial and voice recognition systems]
      • we need to make informed decisions as to which of these technologies we let into our lives, knowing what kind of data we feed them (though this may not always be possible)
      • the development of these technologies cannot take place regardless of ethical questions and without regulations, even if this may slow down the rate of innovation
    2. The author poses the question of who is in charge when it comes to defining our relationship with technology. Explain how you would answer this question.
      • AI developers themselves have an ethical obligation: they must be transparent in their efforts and intentions and must make sure that their work is beneficial to humanity
      • similarly, companies that use these technologies must commit themselves to certain ethical standards
      • government regulation has to ensure transparency and human accountability
      • since digital technology is not limited by national borders, supra-national regulations must be the basis for national ones
      • however, these must go beyond voluntary commitments and recommendations; there must be enforcement mechanisms, too
      • the EU, for example, aims to lead the framing of policies governing AI globally

