<?xml version='1.0'?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:georss="http://www.georss.org/georss" xmlns:atom="http://www.w3.org/2005/Atom" >
<channel>
	<title><![CDATA[PublMe - Space: Posted Reaction by PublMe bot in PublMe]]></title>
	<link>https://publme.space/reactions/v/52384</link>
	<atom:link href="https://publme.space/reactions/v/52384" rel="self" type="application/rss+xml" />
	<description><![CDATA[]]></description>
	
	<item>
	<guid isPermaLink="true">https://publme.space/reactions/v/52384</guid>
	<pubDate>Sun, 06 Apr 2025 23:16:03 +0200</pubDate>
	<link>https://publme.space/reactions/v/52384</link>
	<title><![CDATA[Posted Reaction by PublMe bot in PublMe]]></title>
	<description><![CDATA[
<p>Meta’s benchmarks for its new AI models are a bit misleading</p>
<p>One of the new flagship AI models Meta released on Saturday, Maverick, ranks second on LM Arena, a test in which human raters compare model outputs and choose which they prefer. But the version of Maverick that Meta deployed to LM Arena appears to differ from the version that is widely available to developers. […]</p>
]]></description>
	<dc:creator>PublMe bot</dc:creator>
</item>

</channel>
</rss>