

It may be difficult to meet all your criteria, but worth browsing some of these:


Yes, the LLMs received credit for each level even if they didn’t complete the entire environment.
They have some replays of tasks on their website: https://arcprize.org/tasks
Here’s one where the human completed all 9 levels in 1458 actions, but the LLM completed only one level in 24 actions, then struggled for another 190 actions until it timed out, I guess. The LLM scored 2.8% because of the weighted average, I think. I didn’t take the time to do all the math, and I’m not sure the replay action count is accurate, but it gives you an idea.
Human: https://arcprize.org/replay/0d461c1c-21e5-4dc8-b263-9922332a6485
LLM: https://arcprize.org/replay/cc821983-3975-4ae4-a70b-e031f6807bb0


You can really only judge the fairness of the score if you understand the scoring criteria. It is a relative score where the human baseline is 100%: a task was only included in the challenge if at least two people in the panel of humans were able to solve it completely, and their action count serves as the measure of efficiency. That baseline is the point of comparison.
From the Technical Report:
The procedure can be summarized as follows:
• “Score the AI test taker by its per-level action efficiency” - For each level that the test taker completes, count the number of actions that it took.
• “As compared to human baseline” - For each level that is counted, compare the AI agent’s action count to a human baseline, which we define as the second-best human action count. Ex: If the second-best human completed a level in only 10 actions, but the AI agent took 100 to complete it, then the AI agent scores (10/100)^2 for that level, which gets reported as 1%. Note that level scoring is calculated using the square of efficiency.
• “Normalized per environment” - Each level is scored in isolation. Each individual level gets a score between 0% (very inefficient) and 100% (matches or surpasses human-level efficiency). The environment score is a weighted average of the level scores across all levels of that environment.
• “Across all environments” - The total score will be the sum of individual environment scores divided by the total number of environments. This will be a score between 0% and 100%.
So the humans “scored 100%” because that is the baseline by definition, and the AIs are evaluated on how close they got to human correctness and efficiency. So a score of 0.26% means roughly 1/0.0026 ≈ 385 times less efficient (and correct) compared to humans.
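To make the arithmetic concrete, here’s a rough sketch in Python of how I read that procedure. The report only says “weighted-average”, so the equal per-level weighting (and the function names) are my own assumptions:

```python
# Rough sketch of the scoring described above (my reading of the report).
# Assumption: levels are weighted equally within an environment.

def level_score(ai_actions, human_baseline_actions):
    """Squared action efficiency vs. the second-best human, capped at 100%."""
    if ai_actions is None:                      # level not completed -> 0%
        return 0.0
    efficiency = human_baseline_actions / ai_actions
    return min(efficiency, 1.0) ** 2            # matching or beating humans caps at 1.0

def environment_score(levels):
    """levels: list of (ai_actions or None, human_baseline_actions) per level."""
    return sum(level_score(a, h) for a, h in levels) / len(levels)

def total_score(environments):
    """Average the environment scores across all environments."""
    return sum(environment_score(env) for env in environments) / len(environments)

# The example from the report: human baseline 10 actions, AI takes 100
print(level_score(100, 10))   # (10/100)**2 = 0.01, reported as 1%
```

Under those assumptions, completing only 1 of 9 levels at roughly half the human’s action efficiency would land around 0.5² / 9 ≈ 2.8%, which is in the ballpark of the replay linked above, though the real per-level weights and baselines may differ.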


The goal of the ARC organization is to continually measure progress towards AGI, not to come up with some predictive threshold for when AGI is achieved.
As long as they can continue to measure a gap between “easy for humans” and “hard for AI”, they will continue releasing new iterations of this ARC-AGI challenge series. Currently they do that about once a year.
More detail about the mission here: https://arcprize.org/arc-agi


It’s true that frontier models got better at the previous challenges, but it’s worth noting that they’re still not quite at human level even with those simpler tasks.
Also, each generation of the challenge tries to close loopholes that newer models would exploit, like brute-forcing the training with tons of synthesized tasks and solutions, over-fitting to these particular kinds of tasks, and exploiting similarities between the tasks in the challenge.
A common strategy in past challenges was to generate thousands of similar tasks, and you can imagine the big AI companies were able to do that at massive scale for their frontier models.


There’s a column linking to replays in the table of tasks here: https://arcprize.org/tasks


This is my rough upper-bound estimate based on the Technical Report. Human participants were paid to complete and evaluate the tasks at an average fixed fee of $128 plus $5 per solved task. So if a panel of humans were tasked with solving the 25 tasks in the public test set, it would come to about $250 per person on average. Although, looking at it again, the costs listed for the LLMs are per task, so it would actually be more like $10 per human per task. In any case it’s one or two orders of magnitude less than the LLMs.
From the report: “Participants received a fixed participation fee of $115–$140 for completing the session, along with a $5 performance-based incentive for each environment successfully solved.”
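For a quick back-of-the-envelope check (my own numbers: the midpoint of the fee range, the 25-task public test set, and assuming all tasks solved):

```python
# Back-of-the-envelope human cost per task (my assumptions: midpoint of the
# $115-$140 participation fee, $5 per solved task, all 25 public tasks solved).
fixed_fee = (115 + 140) / 2          # ~$128 average participation fee
bonus_per_task = 5
tasks = 25

total_per_person = fixed_fee + bonus_per_task * tasks   # ~$252.50
cost_per_task = total_per_person / tasks                # ~$10.10
print(total_per_person, cost_per_task)
```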


ARC-AGI-3 Launch event - Shared publicly live on March 25 in San Francisco at Y Combinator HQ, featuring a fireside conversation between François Chollet (creator, ARC-AGI) and Sam Altman (CEO, OpenAI) on measuring intelligence on the path to AGI.
François Chollet is a software engineer, artificial intelligence researcher, and former Senior Staff Engineer at Google. Chollet is the creator of the Keras deep-learning library released in 2015.


“I KNOW YOU’VE TOLD US THAT THESE MICRO-SHELTERS ARE MUCH BETTER THAN THE ENCAMPMENTS, AND YOUR LIVES HAVE IMPROVED REMARKABLY IN JUST SIX WEEKS, BUT WE’RE SHUTTING IT DOWN BECAUSE ZOMBIECYBORGFROMOUTERSPACE INSISTS THAT WE CAN’T GIVE YOU THIS SOLUTION UNTIL WE’VE IMPLEMENTED AN ABSOLUTELY PERFECT SOLUTION, AND THAT WILL PROBABLY TAKE A FEW YEARS, IF YOU’RE LUCKY. SORRY!”
Nothing wrong with having higher expectations, but you don’t have to shit on a good thing in the meantime.


It’s a fair question, but the cost might include land rental, property taxes, and salaries for support staff. It’s not just the physical housing that’s important. What makes it successful is the services available to the residents. I think it’s worth digging into the financials, but I don’t think it’s fair to assume that it’s $115k just to build each unit.
https://hatchetmedia.substack.com/podcast