Makes me curious what state R was in at the time, or what else could have been useful for deep learning, and what the benefits of a new language were versus adapting something that already existed. Seems like it was a big investment.
library(tidyverse)  # readr for read_csv(), dplyr for the pipeline, ggplot2 for the plot

# Summarize bill length and body mass per species and island
penguin_summary <- read_csv("penguins.csv") |>
  na.omit() |>
  select(species, island, bill_length_mm, body_mass_g) |>
  group_by(species, island) |>
  summarize(
    mean_bill_length = mean(bill_length_mm),
    mean_mass = mean(body_mass_g),
    n = n(),
    .groups = "drop"
  ) |>
  arrange(species, desc(mean_bill_length))

# Grouped bar chart of mean bill length by species and island
penguin_summary |>
  ggplot(aes(x = species, y = mean_bill_length, fill = island)) +
  geom_col(position = "dodge") +
  labs(
    title = "Mean Bill Length by Species and Island",
    y = "Mean Bill Length (mm)"
  ) +
  theme_minimal()
I recommend the article "Evaluating the Design of the R Language" [1] - it reads like a horror story. The memory usage and performance are abysmal, the OO features are a mess, and the semantics are very weird ("best effort semantics" is about as predictable as it sounds!). The lexical scoping is based on Scheme but is riddled with strange edge cases. It's a dumpster fire of a language, but it somehow works for its intended purpose - see the toy examples below.
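To make that concrete, here are a few toy snippets of my own (not taken from the paper) showing the kind of forgiving, "do what I probably meant" behaviour it criticizes - partial matching, silent recycling, name lookup deferred to call time, and classes that are just mutable attributes:

    # Illustrative only -- my examples, not the paper's
    x <- list(value = 42)
    x$val                      # 42 -- `$` silently partial-matches the name "value"

    c(1, 2, 3, 4) + c(10, 20)  # 11 22 13 24 -- the shorter vector is silently recycled

    f <- function() y + 1      # defines fine even though `y` does not exist yet
    y <- 10
    f()                        # 11 -- the lookup only happens when f() is called

    z <- 1:3
    class(z) <- "lm"           # S3 "classes" are just an attribute you can overwrite;
                               # nothing stops you from lying about an object's type

Each of these is convenient at an interactive prompt and a landmine in a large program.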
(Survival analysis and multilevel modeling come to mind as areas where it genuinely shines.)
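For what it's worth, both really are near one-liners with long-standing packages and their bundled example datasets (survival ships with R as a recommended package; lme4 is on CRAN). A rough sketch:

    library(survival)   # includes the `lung` dataset
    library(lme4)       # includes the `sleepstudy` dataset

    # Kaplan-Meier curves and a Cox proportional hazards model
    km  <- survfit(Surv(time, status) ~ sex, data = lung)
    cox <- coxph(Surv(time, status) ~ age + sex, data = lung)
    summary(cox)

    # Multilevel model with random intercepts and slopes per subject
    mlm <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
    summary(mlm)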