Started reading the preview of AI Snake Oil. It opens with a retelling of what developer Thomas Ptacek got from ChatGPT when he asked for a biblical verse explaining how to remove a peanut butter sandwich from a VCR.
The passage below makes me think of the Russell Peters standup bit where he does a stereotyped impression of how a particular man would explain baking a cake.
“Fear not, my child, for I shall guide thy hand and show thee the way. Take thy butter knife, and carefully insert it between the sandwich and the VCR, and gently pry them apart. And with patience and perseverance, the sandwich shall be removed, and thy VCR shall be saved.”
In other words: mildly entertaining? Highly confident? Useless? Sure, let's pick all three 😂.
But in this case, given the societal implications at stake, we're talking biblical proportions!
Also, what came to mind was this bus I noticed during covid times…
crisis of replicability
p23 Preach! Several examples of flaws in ML models, with research attempts to address them falling on deaf ears. Hmm, "self-correction" is the preference?
Reading many examples, including a "next music hit" predictor that failed to keep its train and test data separate during validation, and a "has covid" predictor where positive and negative examples came from different age groups (children vs. adults), so models just learned "is child" instead of "has covid". Wow.
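The covid example is a textbook shortcut-learning setup. A toy sketch of why such a confounded evaluation looks great until the confound breaks (all data here is synthetic and the "child ⇒ covid" direction is just illustrative, not taken from the actual study):

```python
import random

random.seed(0)

# Synthetic data mimicking the flawed study: in the evaluation set,
# age tracks the label exactly (every positive is a child, every
# negative an adult), so "is child" alone predicts "has covid".
def make_confounded_data(n):
    data = []
    for _ in range(n):
        has_covid = random.random() < 0.5
        is_child = has_covid  # the confound: age == label
        data.append((is_child, has_covid))
    return data

train = make_confounded_data(1000)

# A "model" that learned nothing but the shortcut.
def shortcut_model(is_child):
    return is_child

# Looks perfect on data that shares the confound...
train_acc = sum(shortcut_model(x) == y for x, y in train) / len(train)

# ...but is worthless once age and covid status are independent.
def make_fair_data(n):
    return [(random.random() < 0.5, random.random() < 0.5) for _ in range(n)]

test = make_fair_data(1000)
test_acc = sum(shortcut_model(x) == y for x, y in test) / len(test)

print(f"accuracy with confound:    {train_acc:.2f}")  # 1.00
print(f"accuracy without confound: {test_acc:.2f}")   # ~0.50, chance level
```

The fix the researchers were asking for is exactly this second check: evaluate on data where the spurious feature no longer correlates with the label.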
lack of AI product audits
p24 hiring: Pymetrics and HireVue, in the dog house: no public audits for "does it work?", only for "are the models biased w.r.t. demographics?"
p25 Sounds like the FTC has stepped up in 2023, noticing unjustified claims on AI products. Cool, that is their wheelhouse.
p26 Social Sentinel: student protest surveillance disguised as threat detection. Oops.
Retention models
p36 Crazy story about Mount St. Mary's University using a retention model not to help people succeed, but to identify students likely to drop out and push them out early, before they hurt the school's statistics.
But wouldn't it be fairer at this point to say that clearly any technology can be used for good or evil, before we even know whether this model was accurate?
And going even deeper, the fact that university retention rates are tracked like this might itself be a problem, creating exactly the perverse incentives we see here.