Recently, the rest of NYC and I went through a cold spell, and news outlets reported2 that the 13-day stretch of sub-zero weather ending Feb 6th was not longer than a 16-day stretch in 1881. It was also shorter than a 1963 stretch, though it tied a 13-day streak from January 2018.

But this recent winter sure felt extreme. I know there is recency bias, but I figured, why not also compare the area under the curve? So I ranked the coldest 14-day stretches using the available data from the past decade, and then tried to visualize the spans of the coldest years too.

But even by that measure, it still looks like 2018 was colder.

Sourcing

I did all of this with a ChatGPT agent, which obtained the data from a weather API I hadn't heard of before, Open-Meteo1, so I figured let's just plot all the data first as a sanity check.
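
For reference, here is a rough sketch of what that download step could look like against the Open-Meteo historical archive endpoint. The coordinates, date range, variable list, and output path below are assumptions for illustration; the real download lives in download_open_meteo() in the repo.

import json
import requests

# Hypothetical sketch of the archive download, not the actual download_open_meteo().
def download_nyc_archive(path="data/archive.json"):
    resp = requests.get(
        "https://archive-api.open-meteo.com/v1/archive",
        params={
            "latitude": 40.71,    # assumed: roughly NYC
            "longitude": -74.01,
            "start_date": "2016-02-15",
            "end_date": "2026-02-14",
            "daily": ",".join([
                "temperature_2m_min", "temperature_2m_max",
                "apparent_temperature_min", "apparent_temperature_max",
                "sunrise", "sunset",
            ]),
            "temperature_unit": "fahrenheit",
            "timezone": "America/New_York",
        },
        timeout=30,
    )
    resp.raise_for_status()
    with open(path, "w") as f:
        json.dump(resp.json(), f)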

Implementation note

The full source code is available here3.

from brr_cold.download import download_open_meteo
# uncomment to (re)download the raw Open-Meteo archive:
# download_open_meteo()
from brr_cold.winter import plot_full_timeseries, load_open_meteo_archive_json

weather = load_open_meteo_archive_json("data/archive.json")
df = weather.df
df.head()
shape: (5, 7)
┌────────────┬────────────┬─────────────┬──────────────────┬───────────────────┬─────────────────────┬─────────────────────┐
│ date       ┆ low_temp_F ┆ high_temp_F ┆ feels_like_low_F ┆ feels_like_high_F ┆ sunrise             ┆ sunset              │
│ ---        ┆ ---        ┆ ---         ┆ ---              ┆ ---               ┆ ---                 ┆ ---                 │
│ date       ┆ f64        ┆ f64         ┆ f64              ┆ f64               ┆ datetime[μs]        ┆ datetime[μs]        │
╞════════════╪════════════╪═════════════╪══════════════════╪═══════════════════╪═════════════════════╪═════════════════════╡
│ 2016-02-15 ┆ 13.3       ┆ 40.5        ┆ 5.7              ┆ 35.6              ┆ 2016-02-15 06:50:00 ┆ 2016-02-15 17:30:00 │
│ 2016-02-16 ┆ 32.0       ┆ 52.7        ┆ 24.6             ┆ 45.5              ┆ 2016-02-16 06:49:00 ┆ 2016-02-16 17:31:00 │
│ 2016-02-17 ┆ 29.0       ┆ 42.6        ┆ 21.3             ┆ 33.9              ┆ 2016-02-17 06:47:00 ┆ 2016-02-17 17:32:00 │
│ 2016-02-18 ┆ 23.9       ┆ 34.9        ┆ 14.6             ┆ 24.5              ┆ 2016-02-18 06:46:00 ┆ 2016-02-18 17:33:00 │
│ 2016-02-19 ┆ 21.1       ┆ 35.3        ┆ 13.2             ┆ 26.9              ┆ 2016-02-19 06:45:00 ┆ 2016-02-19 17:34:00 │
└────────────┴────────────┴─────────────┴──────────────────┴───────────────────┴─────────────────────┴─────────────────────┘
plot_full_timeseries(df)

What to compare

I wanted to compare the area under the curve, but I didn't want to mess around with negative numbers, so I figured it would be safer to use the highs rather than the lows: the Fahrenheit lows will have negatives, but spot-checking the NY data, I didn't see any days with a negative-Fahrenheit high. This is not yet the Arctic, luckily!

So here is how I phrased the comparison metric in my ChatGPT prompt, once using the feels-like data and once using just the standard temperature data:

w14_feels_like_high_F(t) = feels_like_high_F(t - 14) + feels_like_high_F(t - 13) + ... + feels_like_high_F(t - 1) 
w14_high_temp_F(t) = high_temp_F(t - 14) + high_temp_F(t - 13) + ... + high_temp_F(t - 1)
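
As a concrete reference, here is a minimal polars sketch of that kind of trailing window, assuming one row per day with no gaps. Note that the helper actually used below, add_rolling_14day_averages_excluding_today, reports 14-day averages rather than sums, as its output column names suggest.

import polars as pl

def add_trailing_14day_means(df: pl.DataFrame) -> pl.DataFrame:
    # Shift by one row so "today" is excluded from its own window,
    # then average the previous 14 daily values.
    return df.sort("date").with_columns(
        pl.col("high_temp_F").shift(1).rolling_mean(window_size=14)
            .alias("w14_high_avg_F"),
        pl.col("feels_like_high_F").shift(1).rolling_mean(window_size=14)
            .alias("w14_feels_like_high_avg_F"),
    )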

Ranked spans

Initially, I was kind of shocked that I didn't see any 2026 data in the rolling-window results, but then I realized, oops, I was using data up to 2025-02-14 instead of 2026-02-14, yikes! Then I pulled again with the additional year, and yes, both 2026 and 2018 filled up the coldest 14-day stretches.

from brr_cold.winter import add_rolling_14day_averages_excluding_today

df2 = add_rolling_14day_averages_excluding_today(df)
df2[-50:]
shape: (50, 9)
┌────────────┬────────────┬─────────────┬──────────────────┬───────────────────┬─────────────────────┬─────────────────────┬────────────────┬───────────────────────────┐
│ date       ┆ low_temp_F ┆ high_temp_F ┆ feels_like_low_F ┆ feels_like_high_F ┆ sunrise             ┆ sunset              ┆ w14_high_avg_F ┆ w14_feels_like_high_avg_F │
│ ---        ┆ ---        ┆ ---         ┆ ---              ┆ ---               ┆ ---                 ┆ ---                 ┆ ---            ┆ ---                       │
│ date       ┆ f64        ┆ f64         ┆ f64              ┆ f64               ┆ datetime[μs]        ┆ datetime[μs]        ┆ f64            ┆ f64                       │
╞════════════╪════════════╪═════════════╪══════════════════╪═══════════════════╪═════════════════════╪═════════════════════╪════════════════╪═══════════════════════════╡
│ 2025-12-27 ┆ 17.5       ┆ 30.2        ┆ 9.8              ┆ 22.4              ┆ 2025-12-27 07:19:00 ┆ 2025-12-27 16:35:00 ┆ 38.835714      ┆ 32.05                     │
│ 2025-12-28 ┆ 11.2       ┆ 34.6        ┆ 4.2              ┆ 28.4              ┆ 2025-12-28 07:19:00 ┆ 2025-12-28 16:36:00 ┆ 38.221429      ┆ 31.364286                 │
│ 2025-12-29 ┆ 28.5       ┆ 45.2        ┆ 17.8             ┆ 38.3              ┆ 2025-12-29 07:19:00 ┆ 2025-12-29 16:37:00 ┆ 38.357143      ┆ 31.442857                 │
│ 2025-12-30 ┆ 26.0       ┆ 31.5        ┆ 16.5             ┆ 20.5              ┆ 2025-12-30 07:20:00 ┆ 2025-12-30 16:37:00 ┆ 39.671429      ┆ 32.835714                 │
│ 2025-12-31 ┆ 25.4       ┆ 31.5        ┆ 15.9             ┆ 21.4              ┆ 2025-12-31 07:20:00 ┆ 2025-12-31 16:38:00 ┆ 39.935714      ┆ 32.814286                 │
│ …          ┆ …          ┆ …           ┆ …                ┆ …                 ┆ …                   ┆ …                   ┆ …              ┆ …                         │
│ 2026-02-10 ┆ 14.3       ┆ 31.5        ┆ 6.9              ┆ 23.6              ┆ 2026-02-10 06:55:00 ┆ 2026-02-10 17:24:00 ┆ 22.885714      ┆ 14.871429                 │
│ 2026-02-11 ┆ 30.6       ┆ 38.1        ┆ 21.2             ┆ 30.9              ┆ 2026-02-11 06:54:00 ┆ 2026-02-11 17:25:00 ┆ 23.792857      ┆ 15.907143                 │
│ 2026-02-12 ┆ 23.4       ┆ 33.5        ┆ 14.6             ┆ 24.6              ┆ 2026-02-12 06:53:00 ┆ 2026-02-12 17:27:00 ┆ 25.092857      ┆ 17.271429                 │
│ 2026-02-13 ┆ 16.7       ┆ 34.0        ┆ 8.1              ┆ 27.5              ┆ 2026-02-13 06:52:00 ┆ 2026-02-13 17:28:00 ┆ 26.235714      ┆ 18.392857                 │
│ 2026-02-14 ┆ 18.0       ┆ 38.8        ┆ 10.4             ┆ 32.5              ┆ 2026-02-14 06:51:00 ┆ 2026-02-14 17:29:00 ┆ 27.571429      ┆ 19.907143                 │
└────────────┴────────────┴─────────────┴──────────────────┴───────────────────┴─────────────────────┴─────────────────────┴────────────────┴───────────────────────────┘
print(
    df2.drop_nulls()
    .sort("w14_high_avg_F", descending=False)
    [:10]
    .select("date", "w14_high_avg_F", "w14_feels_like_high_avg_F"))
shape: (10, 3)
┌────────────┬────────────────┬───────────────────────────┐
│ date       ┆ w14_high_avg_F ┆ w14_feels_like_high_avg_F │
│ ---        ┆ ---            ┆ ---                       │
│ date       ┆ f64            ┆ f64                       │
╞════════════╪════════════════╪═══════════════════════════╡
│ 2018-01-09 ┆ 22.035714      ┆ 10.592857                 │
│ 2018-01-08 ┆ 22.692857      ┆ 11.378571                 │
│ 2018-01-10 ┆ 22.807143      ┆ 11.642857                 │
│ 2026-02-10 ┆ 22.885714      ┆ 14.871429                 │
│ 2026-02-09 ┆ 22.992857      ┆ 14.935714                 │
│ 2026-02-07 ┆ 23.221429      ┆ 15.078571                 │
│ 2018-01-11 ┆ 23.628571      ┆ 12.721429                 │
│ 2026-02-08 ┆ 23.692857      ┆ 15.735714                 │
│ 2026-02-11 ┆ 23.792857      ┆ 15.907143                 │
│ 2026-02-06 ┆ 23.814286      ┆ 15.35                     │
└────────────┴────────────────┴───────────────────────────┘

Let's look at the coldest streak for each year
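
There is not much to that per-year selection; here is a sketch of how it could be done in polars, assuming each window is bucketed by the calendar year of its end date (the notebook may group winters differently):

import polars as pl

# Hypothetical sketch: the coldest trailing 14-day window per calendar year.
coldest_per_year = (
    df2.drop_nulls()
    .with_columns(pl.col("date").dt.year().alias("year"))
    .sort("w14_high_avg_F")
    .group_by("year", maintain_order=True)
    .first()
    .sort("w14_high_avg_F")
    .select("year", "date", "w14_high_avg_F", "w14_feels_like_high_avg_F")
)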

And let's stack the five coldest years

from brr_cold.winter import stacked_plot_top5_coldest_years
stacked_plot_top5_coldest_years(df2)
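
In case it helps picture what that does: a hypothetical matplotlib sketch that overlays the 14 days of highs leading up to each of the five coldest windows, one line per year, on a shared day-of-window axis. This is not the actual stacked_plot_top5_coldest_years, which may stack subplots rather than overlay lines.

from datetime import timedelta

import matplotlib.pyplot as plt
import polars as pl

def sketch_top5_overlay(df2: pl.DataFrame):
    # Pick the coldest window end date for each of the five coldest years.
    ranked = (
        df2.drop_nulls()
        .with_columns(pl.col("date").dt.year().alias("year"))
        .sort("w14_high_avg_F")
        .group_by("year", maintain_order=True)
        .first()
        .sort("w14_high_avg_F")
        .head(5)
    )
    fig, ax = plt.subplots()
    for end_date, year in zip(ranked["date"], ranked["year"]):
        # The window covers the 14 days before end_date, matching the metric above.
        window = df2.filter(
            (pl.col("date") >= end_date - timedelta(days=14))
            & (pl.col("date") < end_date)
        ).sort("date")
        ax.plot(range(1, window.height + 1), window["high_temp_F"].to_list(), label=str(year))
    ax.set_xlabel("day of 14-day window")
    ax.set_ylabel("high_temp_F")
    ax.legend()
    return fig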

Note about how this blog post was implemented

This blog post, unlike past ones, was built purely from the Jupyter notebook4 in git, using jupyter nbconvert, except for uploading the final images to my CDN, which hopefully I can automate one day too.

Most of the code was built through ChatGPT prompts, except for some parts that came out as pandas code, which I converted to polars by hand.
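
As a tiny illustrative example of that kind of translation (not a line from the notebook): a trailing mean the way it tends to come back in pandas, and its polars equivalent.

# pandas, as ChatGPT tends to write it:
#   df["w14_high_avg_F"] = df["high_temp_F"].shift(1).rolling(14).mean()
# polars equivalent:
import polars as pl
df = df.with_columns(
    pl.col("high_temp_F").shift(1).rolling_mean(window_size=14).alias("w14_high_avg_F")
)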

References

  1. https://open-meteo.com/en/docs/historical-weather-api
  2. https://www.bbc.com/news/articles/cd9g8nxdexko
  3. https://github.com/namoopsoo/brrrrrr
  4. https://github.com/namoopsoo/brrrrrr/blob/main/2026-02-14--coldest.ipynb