A seedling of a thought, inspired by the AI Snake Oil book [1], the Andy Galpin interview with Dr. Herman Pontzer I noted [2], some recent conversations, and the Mercenary post from earlier [3]: there is an n-of-1 idea brewing in my mind.

Most per-person data science is not highly accurate, but it is good enough for aggregate, population-level stats. Maybe we can get the aggregate lift here, which is related to the lift of a better model but not quite the same thing. It would be more like an aggregate financial benefit, like population-level insurance risk mitigation, say. A dollar lift, perhaps.
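A minimal sketch of what "dollar lift" might mean here, with entirely made-up numbers: even if per-person predictions are individually noisy, the aggregate gain over a baseline policy can still come out positive.

```python
# Illustrative sketch of aggregate "dollar lift" (all numbers hypothetical).
# Lift = total expected gain under the model-driven policy minus total
# expected gain under the status-quo baseline, summed over the population.

def dollar_lift(model_gain, baseline_gain):
    """Population-level dollar lift of a model over a baseline policy."""
    return sum(model_gain) - sum(baseline_gain)

# Hypothetical per-person expected savings, e.g. from insurance risk
# mitigation. Note person 2 is actually worse off under the model
# (per-person accuracy is shaky), yet the aggregate lift is positive.
model_gain = [12.0, -3.0, 8.5, 0.0, 5.5]
baseline_gain = [4.0, 1.0, 2.0, 0.0, 1.0]

print(dollar_lift(model_gain, baseline_gain))  # 23.0 - 8.0 = 15.0
```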

But that n-of-1 accuracy is not as awesome as, say, predicting hotdog/not-hotdog or cat/not-cat, where we have several more orders of magnitude of data.

What do doctors do, say? Doctors treat the person in front of them. They certainly have far fewer data points than a diagnostic model, but they have higher resolution on the more limited data points they do have.

They are not necessarily using science all the time. Maybe intuition.

In any case, what would an n-of-1 suite of models look like? Not sure yet.

Yet every person on the planet is simultaneously playing their personal n-of-1 game, and many who have since passed on played it too, of course (game over?).

Do I smell reinforcement learning or imitation learning here?

Self-help advice is famously outdated, and advice in general is really hard to absorb. Learning takes a lot of experiential time. It is not like Neo downloading experience in The Matrix.

Some kind of fine-tuning is going on.

References

  1. https://michal.piekarczyk.xyz/note/2026-01-19-how-to-make-cake/
  2. https://michal.piekarczyk.xyz/note/2026-04-21-inter-intra-group-variations/
  3. Mercenaries