AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.
Elon Musk, Technology Entrepreneur and Investor
The development of full artificial intelligence could spell the end of the human race…. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.
Stephen Hawking
As more and more artificial intelligence is entering into the world, more and more emotional intelligence must enter into leadership.
Amit Ray, AI Scientist and Author
Learning Objectives:
  • c. and 2.c

[To understand classification problems, and the role that training data plays in classification accuracy]

Instructions:

  • Connection: What were the three components of AI that we talked about?
    • Dataset, Learning Algorithm, and Prediction
  • We are going to learn about training datasets and focus more on the “Learning Algorithm” part of AI. We will be talking about one very common form of AI: supervised machine learning.
  • Give examples to explain it
    • A parent teaching a baby colours (the parent provides labelled examples, just like training data).
  • Regression and Classification
    • Regression is trying to predict a specific numerical value from the data.
    • Classification is trying to predict which category a new piece of data belongs to (see the sketch below).
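For facilitators who want a concrete version of this distinction, here is a minimal sketch in Python (assuming scikit-learn is installed; the study-hours data and pass/fail labels are invented for illustration):

```python
# A hypothetical sketch: the same feature, framed as regression vs. classification.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Toy data: hours of study (feature) for six students.
hours = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])

# Regression target: a specific numerical value (exam score out of 100).
scores = np.array([52, 58, 65, 71, 80, 88])
reg = LinearRegression().fit(hours, scores)
print("Predicted score for 4.5 hours:", reg.predict([[4.5]])[0])

# Classification target: a category (0 = fail, 1 = pass).
passed = np.array([0, 0, 0, 1, 1, 1])
clf = LogisticRegression().fit(hours, passed)
print("Predicted pass/fail for 4.5 hours:", clf.predict([[4.5]])[0])
```

The same feature can feed either task; what changes is whether the target is a number (regression) or a category (classification).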
  • Ask about examples of classification
    • Prompts: they learnt about animal classification in class, or how books and music are classified into genres.
  • Now I will talk about a few examples of classification technology (slides; a spam-detection sketch follows this list)
    • Handwriting recognition
    • Spam detection
    • Face detection
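As one concrete example for facilitators, spam detection can be sketched as a tiny text classifier. This is a hypothetical illustration, not required lesson material; it assumes scikit-learn, and the messages and labels are invented:

```python
# A minimal spam/ham classifier: turn text into word counts, then
# train a Naive Bayes model to predict the category of new messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now", "claim your free money",
    "lunch at noon tomorrow?", "meeting notes attached",
]
labels = ["spam", "spam", "ham", "ham"]  # ham = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
clf = MultinomialNB().fit(X, labels)

new_message = ["free prize waiting for you"]
print(clf.predict(vectorizer.transform(new_message)))  # -> ['spam']
```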
  • Building the classifiers
    • Demo the teachable machine
      • Highlight the difference between training data and test data (see the sketch after the questions below)
    • Then they build their own classifier (using their own hands and faces as the classes)
    • They do this for 5–8 minutes
    • Questions:
      • What happens if you train only one class?
      • What happens as you increase your dataset?
      • What happens when your test dataset is different from your training dataset?
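These questions can also be illustrated in code. The sketch below imitates the Teachable Machine setup with invented 2-D “image features” (assuming scikit-learn and NumPy); the point is that accuracy drops when the test data is drawn differently from the training data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Two classes of 2-D points standing in for image features.
hands = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
faces = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
X = np.vstack([hands, faces])
y = np.array([0] * 100 + [1] * 100)  # 0 = hand, 1 = face

# Hold out test data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier().fit(X_train, y_train)
print("Accuracy on test data like the training data:",
      accuracy_score(y_test, clf.predict(X_test)))

# Now test on faces drawn differently from the training data
# (e.g., different lighting shifts the features toward the hand cluster).
shifted_faces = rng.normal(loc=[1.2, 1.2], scale=0.5, size=(50, 2))
print("Accuracy on shifted faces:",
      accuracy_score(np.ones(50, dtype=int), clf.predict(shifted_faces)))
```

Expect near-perfect accuracy on the held-out split and a collapse on the shifted data, mirroring what students see when they test their classifier in new conditions.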
    • Building the cat and dog classifier, then testing it
      • How are your classifiers working?
        • It works better for cats
          • Why? There are more cat pictures, and the dog pictures were mostly really fluffy, cat-like, and not diverse.
        • Prompts:
          • Is this classifier useful if it only works well on cats?
          • Why do you think it works better on cats?
          • How do we make it better with our training data?
        • When algorithms, specifically AI systems, have outcomes that are unfair in a systematic way, we call that algorithmic bias. The cat-dog classifier is biased towards cats and against dogs (a sketch for measuring this follows).
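One way to make this bias visible is to measure accuracy per class rather than overall. A minimal sketch (the labels and predictions below are invented; in class they would come from the students’ own cat/dog classifier):

```python
import numpy as np

y_true = np.array(["cat"] * 10 + ["dog"] * 10)
y_pred = np.array(["cat"] * 10 + ["dog"] * 4 + ["cat"] * 6)  # many dogs mislabelled

# Per-class accuracy exposes the gap that overall accuracy hides.
for label in ["cat", "dog"]:
    mask = y_true == label
    acc = (y_pred[mask] == label).mean()
    print(f"Accuracy on {label}s: {acc:.0%}")

# Overall: 70% -- but that averages 100% on cats with only 40% on dogs.
print(f"Overall accuracy: {(y_pred == y_true).mean():.0%}")
```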
        • Now students re-curate their datasets
        • How are the classifiers performing now?
          • What did you do to make it work better?
            • If they say “less data”, ask them whether less is really better (the balancing sketch below shows one re-curation option)
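One simple re-curation step students can mirror is balancing the class counts. A hypothetical sketch (the file names are invented; undersampling is only one option, and it does not by itself fix a lack of diversity):

```python
import random

cat_images = [f"cat_{i}.jpg" for i in range(120)]  # many cat pictures
dog_images = [f"dog_{i}.jpg" for i in range(40)]   # few, similar-looking dogs

# Undersample the over-represented class so both classes contribute equally.
random.seed(0)
n = min(len(cat_images), len(dog_images))
balanced = random.sample(cat_images, n) + dog_images
print(f"Balanced training set: {n} cats + {len(dog_images)} dogs")

# Note: equal counts alone are not enough -- the dog pictures also need to be
# more diverse (breeds, poses, backgrounds) for the classifier to generalise.
```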
  • We have seen algorithmic bias in our supervised machine learning systems. Now we will look at algorithmic bias in the real world [VIDEO – GENDER SHADES]
    • What problems did Joy identify in the video?
      • Ask if the technology worked the same for all people
    • Why is this a problem?
      • Unequal user experience between groups
      • “Technology doesn’t work unless it works for everyone”
      • “If you all wanted to use Snapchat filters, would it be fair if some of you did not have access to that technology?”
    • How does Joy suggest we can fix this problem?
      • Better dataset curation
      • Prompt: how did you improve your classifier?
  • How might you find images to better curate your dataset?
    • Social media pictures
    • Mugshots
    • ID cards
    • Which of these would be okay to use as sources of information?
    • Which might lead to more bias?
    • Which might violate privacy?